Sarah Friar, Chief Financial Officer of OpenAI, visited the Oxford Union today. I had the opportunity to attend the Meet and Greet, though regrettably, as Friar arrived late, it consisted solely of the least meaningful component, a photo session, while the one-to-one Q&A was cancelled. Consequently, I did not have the chance to ask about her views on OpenAI’s business model. Some relevant points, however, emerged later during her address and the Chamber discussion. As recording was prohibited inside the Chamber, this memo is reconstructed largely from memory and may contain minor inaccuracies.

Friar’s introduction to OpenAI contained little that was novel. To my mind, her remarks can be summarised under three themes: social responsibility, ecosystem, and AGI. On social responsibility, she stressed that abundant intelligence for all would be OpenAI’s principal contribution to society, and conveyed this message effectively through a polished and moving narrative. On ecosystem development, she highlighted recent collaborations with NVIDIA, AMD, and Oracle, as well as partnerships with Oxford (providing ChatGPT Edu to all students free of charge) and with various governments.

The most intriguing portion of her talk, however, concerned OpenAI’s fixation on AGI. Friar asserted that the company’s current trajectory towards AGI means we are not in a bubble. OpenAI evidently continues to believe that generative AI is the path to AGI and is “all-in” on this conviction, expecting its arrival in the near future, which is unsurprising given that OpenAI’s non-profit parent organisation was founded around this very vision. She made one particularly striking observation: according to Friar, today’s compute shortage stems from decision-makers three years ago failing to anticipate the exponential growth of AI’s demand for computational power; now, she argued, those same people are filled with regret. To avoid repeating their mistake, she believes we must go all-in on compute infrastructure investment today.
I find several of these views questionable. OpenAI’s social contribution is indeed undeniable: AI has tangibly transformed everyday life, and the fact that most people can now access it freely is, in itself, a substantial public good. Friar noted that 95% of ChatGPT users do not pay for the product, presenting this as evidence of OpenAI’s generosity. Yet from a commercial standpoint, this figure is deeply concerning. If consumers display little willingness to pay, how can OpenAI sustain profitability? Should AI prove to be a commodity business, it would inevitably face low margins, which is precisely the fear such a statistic reinforces.
Regarding the ecosystem narrative, I suspect it functions less as a genuine collaborative network and more as a mechanism to mitigate systemic risk: a “chained-together fleet” designed to ensure no one sinks alone. The most persuasive argument I have encountered for why AI valuations can soar without triggering panic is that the Federal Reserve or the U.S. government would never permit an AI crash, intervening with rescue measures if necessary. Yet if this is true, it implies that the risk-pricing mechanism at the firm level has collapsed entirely; valuations are simply too high. I hope my interpretation of OpenAI’s ecosystem strategy is overly cynical.
On AGI, I remain sceptical of how many truly believe that generative AI will lead us there. Neither agents nor world models have yet demonstrated genuine breakthroughs, and the scaling laws that once underpinned optimism now appear to falter. In my view, AGI is more likely to emerge from other technological routes. Generative AI remains, at heart, a vast statistical pattern generator, highly proficient at predicting “the next word”, but devoid of understanding. It excels as an information retriever in the text-saturated era of mobile internet, but it is not intelligence. Friar suggested that exceptional investors recognise AGI’s potential and therefore invest heavily in OpenAI, seeing in it an extraordinary terminal value. One cannot help wondering whether this resembles a modern “emperor’s new clothes” phenomenon.
Her reasoning about compute shortages also deserves scrutiny. Both the decision-makers who underestimated demand three years ago and those now advocating total commitment to compute share the same intellectual flaw: linear extrapolation. The latter may appear more “forward-looking”, yet both infer the future merely by scaling past trends rather than understanding the underlying mechanisms of innovation, technological constraints, and institutional limits. The earlier group assumed linear growth and thus misjudged AI’s expansion; the current group assumes the exponential surge will persist unchanged. Both approaches are passive projections of historical curves rather than active analyses of dynamic feedback among technology, capital, and society. Truly insightful decisions must rest on an understanding of innovation logic, structural constraints, and path dependence, not on multiplying the past by a constant.
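The symmetry between the two mistakes can be made concrete with a toy sketch. This is entirely my own construction, not anything Friar presented, and the numbers are illustrative: fit only the shape of past data, and the two camps err in opposite directions.

```python
# Toy illustration of both extrapolation failures: each forecaster
# curve-fits the past instead of modelling the underlying mechanism.
import math

def linear_forecast(history, steps):
    """Project forward the average step-to-step increment
    (the mistake of those who underestimated demand)."""
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return history[-1] + slope * steps

def exponential_forecast(history, steps):
    """Project forward the average growth ratio
    (the mirror-image mistake of today's all-in camp)."""
    ratio = (history[-1] / history[0]) ** (1 / (len(history) - 1))
    return history[-1] * ratio ** steps

# Case 1: demand actually doubles each period (exponential).
exp_history = [1, 2, 4, 8, 16]
true_future = exp_history[-1] * 2 ** 4          # 4 periods ahead -> 256
print(linear_forecast(exp_history, 4))          # 31.0: a huge underestimate

# Case 2: demand follows a logistic curve about to hit a ceiling of 100.
cap = 100
logistic = [cap / (1 + math.exp(-(t - 6))) for t in range(5)]
# Extrapolating the early, near-exponential phase sails far past the cap.
print(exponential_forecast(logistic, 8))        # tens of thousands, vs. ~100
```

Both forecasters apply the same passive operation, multiplying the past by a constant; only the constant differs. Neither model contains the saturation point, the supply constraint, or the feedback loop that actually determines the outcome.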
Equally revealing were the subjects Friar did not address. She avoided questions about circular financing, which raises doubts about how OpenAI internally perceives such ecosystem-building funding structures: are they deliberately opaque?
Her discussion of the business model was similarly vague. Friar mentioned that OpenAI’s revenues will derive from three sources: subscriptions, tokens, and advertising. Advertising, she said, would include e-commerce-style placements (merchants promoting products), an app-store-like marketplace, and a browser business. What she omitted is telling: OpenAI possesses no evident advantage in any of these domains, and early attempts have been largely unsuccessful.
The subscription and token lines face intense competition and low margins. Many firms in Silicon Valley, seeking to cut costs, already rely on open-source models such as Qwen. While Friar presented numerous anecdotes of ChatGPT improving users’ lives, she conspicuously ignored that other AI products can achieve the same. Such a low-differentiation model cannot sustain OpenAI’s ambitions, particularly when 95% of users are non-paying.
Other commercial experiments raise further concerns. The early plugin App Store has been virtually abandoned, and the browser remains unfinished. Most users continue to treat ChatGPT primarily as a chatbot or a coding assistant. The notion of OpenAI entering e-commerce borders on the implausible: were it to adopt a marketplace model akin to Taobao or Tmall, it would face an obvious difficulty, namely that advertising revenue could distort recommendations and thereby undermine the system’s perceived reliability. Users depend on ChatGPT not only for shopping, but also for tasks that require accuracy and trust. These seemingly promising yet ill-suited ventures, including the recent relaxation of restrictions on adult content, reveal, in my view, more of OpenAI’s anxiety than its strategic coherence. The question remains: what is the true worth of a technology firm without a natural monopoly or a defensible business model?
A few other disclosures were noteworthy. OpenAI is developing a large language model for sign language. Friar also referred to the recently completed structural adjustment of the non-profit parent entity (the legal specifics were somewhat opaque to me), aimed at facilitating collaboration and financing across its ecosystem. This lends further credibility to rumours of a potential public listing.
