The Modern Dilemma

AI, Profitability, and Compute Costs

1. The Wild, Wild Compute Costs (Imagine Your Chatbot as a Coffee Addict)

You know that friend who drinks four espressos just to wake up and then claims it's "just a regular day"? Yeah, that's kind of how AI models like ChatGPT operate. Except instead of coffee, they're chugging compute power. Massive, mind-boggling amounts of it.

Here's the catch: all that compute is really expensive. Compute power isn’t free, and neither is electricity. Every query costs real money, and it's the kind of cost that starts small and becomes enormous at scale.
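To put numbers on the espresso habit, here's a quick back-of-the-envelope sketch. Every figure in it is an illustrative assumption, not OpenAI's actual pricing, and even this simple linear model compounds fast (real-world non-linear effects like peak load and longer conversations only make it worse):

```python
# Back-of-the-envelope: how a tiny per-query cost balloons at scale.
# All numbers are illustrative assumptions, not actual OpenAI figures.

COST_PER_QUERY = 0.005          # assumed avg compute cost per query (USD)
QUERIES_PER_USER_PER_DAY = 15   # assumed usage of an active subscriber

def monthly_compute_cost(users: int, days: int = 30) -> float:
    """Total monthly compute bill for a given number of active users."""
    return users * QUERIES_PER_USER_PER_DAY * days * COST_PER_QUERY

for users in (1_000, 1_000_000, 100_000_000):
    print(f"{users:>11,} users -> ${monthly_compute_cost(users):>13,.0f}/month")

# Output:
#       1,000 users -> $        2,250/month
#   1,000,000 users -> $    2,250,000/month
# 100,000,000 users -> $  225,000,000/month
```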

So why can't OpenAI just make it cheaper? Think of compute costs as an uninvited party guest who keeps eating all the hors d'oeuvres. You can buy cheaper hors d'oeuvres, but they still need to eat a ton of them to stay full. OpenAI's applied team is focusing on ways to make these models run more efficiently, but (spoiler alert) reducing compute costs is easier said than done.

Investors would likely be most interested in the financial impact of compute costs on scalability and sustainability, especially the increasing operational expenses tied to running models like ChatGPT. They’d also want insight into cost-saving efforts and any innovations that OpenAI (or competitors) might pursue to reduce these compute costs.

Non-linear compute costs per query at scale

Table 1: Infrastructure Costs of Leading AI Companies

AI's Insatiable Compute Appetite: The Cost of Every Query

2. Profitability: Raising Prices or Getting Better at Making Coffee

When it comes to profitability, there are a couple of options: raise prices or figure out how to make that coffee cheaper. If every ChatGPT user is drinking their coffee at $20 a month, OpenAI can either up the subscription price or find a cheaper coffee supply. The trouble is that the break-even point might be around $50 per user, so even doubling the price to $40 wouldn't close the gap. Imagine buying your monthly caffeine fix, and the barista whispers that they lose money every time you order.
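Here's the barista's losing math in miniature, with the ~$50 break-even treated as a rough assumption rather than a confirmed figure:

```python
# Unit economics per subscriber (illustrative assumptions only).
SUBSCRIPTION_PRICE = 20.0   # $/user/month, the current coffee price
BREAK_EVEN_COST    = 50.0   # assumed monthly cost to serve one user

margin = SUBSCRIPTION_PRICE - BREAK_EVEN_COST
print(f"Margin per user: ${margin:+.2f}/month")         # -> $-30.00/month

# The two levers: raise the price to the cost, or cut the cost to the price.
required_price = BREAK_EVEN_COST                        # $50/month
required_cost_cut = 1 - SUBSCRIPTION_PRICE / BREAK_EVEN_COST
print(f"Break-even needs a ${required_price:.0f} price "
      f"or a {required_cost_cut:.0%} cost reduction")   # -> 60%
```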

A more vertically integrated company like Google or Microsoft could theoretically make this work better because they have a better handle on the coffee-making supply chain (i.e., compute power). But even they are stuck with limitations. It's like asking if Starbucks can grow its own coffee beans to save money—they can, but only so much land and sunlight are available.

Balancing AI Profitability: Higher Prices or Greater Efficiency?

3. Vertical Integration: When Your Grandma Owns the Whole Grocery Store

Okay, picture this: Microsoft and Google are like grandmas who own not just their kitchen but the entire grocery store. If they want flour, they don't have to buy it—they just walk to aisle 7. They control a lot of the components needed to make something like ChatGPT. They can bring down costs in a way OpenAI, which still has to "buy" compute, can't.

But here's the twist: just because Grandma has access to all the flour doesn’t mean she has unlimited amounts of it. Companies like Google and Microsoft still face compute constraints. If they decide to allocate their precious compute to power their AI, it means taking away resources from other projects (like Grandma making bread for the entire family instead of just a pie for Sunday dinner).

Investors would benefit from understanding the cost advantage of vertical integration for companies like Microsoft and Google versus OpenAI, and how owning the infrastructure can create both cost savings and new limitations (e.g., resource-allocation trade-offs).
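As a rough illustration of the owned-versus-rented gap, here's a sketch with assumed rates; the GPU-hour costs and fleet size are placeholders, not real figures from any of these companies:

```python
# Owning vs. renting compute (all rates are assumed placeholders).
GPU_HOURS_PER_MONTH = 2_000_000   # hypothetical monthly fleet usage

RENTED_RATE = 2.50   # assumed $/GPU-hour paid to a cloud provider
OWNED_RATE  = 1.00   # assumed $/GPU-hour, amortized hardware + power + ops

rented = GPU_HOURS_PER_MONTH * RENTED_RATE
owned  = GPU_HOURS_PER_MONTH * OWNED_RATE

print(f"Rented: ${rented:,.0f}/month vs owned: ${owned:,.0f}/month")
print(f"Savings from owning the grocery store: {1 - owned / rented:.0%}")
# -> Rented: $5,000,000/month vs owned: $2,000,000/month; savings: 60%
```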

Infrastructure Cost Comparison between OpenAI, Google, and Microsoft

Vertical Integration: Grandma’s Grocery Store vs. OpenAI’s Supply Struggles

4. Google vs. ChatGPT: The Search Battle (And Why Google Doesn’t Always Play Nice)

Ever wonder why Google doesn’t just add generative AI to its main search results and call it a day? Well, here’s why: it would be like having an ATM in a bakery. Sure, it’s convenient, but the bakery makes a lot more money when people buy the bread instead of withdrawing cash. Google makes a lot of its money from ads, and generative search would put a dent in that revenue stream. They can technically do it, but they'd rather not mess with the bread-and-butter of their business (pun intended).
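A toy model makes the bakery-ATM tension concrete. The revenue, cost, and click-loss numbers below are all assumptions chosen for illustration, not Google's actual economics:

```python
# Bakery vs. ATM: per-query margins (all figures are assumptions).
AD_REVENUE_PER_QUERY = 0.03   # assumed avg ad revenue per search (USD)
GENERATIVE_COST      = 0.01   # assumed compute cost of a generated answer
CLICK_LOSS           = 0.30   # assumed share of ad clicks lost when the
                              # answer is served directly on the page

classic_margin    = AD_REVENUE_PER_QUERY
generative_margin = AD_REVENUE_PER_QUERY * (1 - CLICK_LOSS) - GENERATIVE_COST

print(f"Classic search:    ${classic_margin:.3f}/query")
print(f"Generative search: ${generative_margin:.3f}/query")
# -> $0.030 vs $0.011 per query: roughly a two-thirds haircut
```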

Plus, it’s about talent. Google has amazing talent, but there’s something about a small, hyper-focused team that can innovate faster—less bureaucracy, more action. Smaller teams like OpenAI’s can push forward in ways that large, bloated teams can’t (even if those bigger teams theoretically have all the right ingredients).

Investors would be interested in the impact of generative AI on Google's core business model, specifically how ads generate revenue vs. the potential (and risk) of shifting to generative search. Emphasizing competitive advantage and revenue implications here would be beneficial.

Potential Revenue Impact of Generative AI in Search

Table 2: Revenue Comparison Between Google and OpenAI

Google’s Bakery ATM Dilemma: Ads vs. AI Search

5. Compute is Universal, But Not Infinite

Here’s a myth-buster: even Google, Microsoft, and Meta don’t have unlimited compute. Imagine a game of musical chairs where every company wants a seat, and the chairs are GPUs. Big companies can buy more chairs, but there are only so many to go around. Every time someone builds a new chair (i.e., a new GPU unit), there’s already a line of people waiting to sit on it.

So yes, OpenAI might seem to struggle with compute more because they have to partner up for hardware, but the same is true even for the big guys. No one has infinite chairs. If they decide to allocate more resources to one AI, it means pulling back elsewhere.
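Here's the musical-chairs problem as a toy allocator. The project names and GPU counts are hypothetical; the point is the proportional squeeze when demand outruns the fleet:

```python
# Musical chairs with GPUs: a toy proportional allocator.
# Project names and counts are hypothetical, for illustration only.
TOTAL_GPUS = 10_000   # the fixed number of chairs

requests = {
    "flagship_llm": 6_000,
    "ads_ranking": 3_000,
    "video_recsys": 2_500,
    "research": 1_500,
}

demand = sum(requests.values())         # 13,000 asked vs 10,000 available
scale = min(1.0, TOTAL_GPUS / demand)   # everyone gets squeezed equally

for project, wanted in requests.items():
    print(f"{project:>13}: wanted {wanted:>5,}, got {wanted * scale:>8,.0f}")

# Every project gets ~77% of its ask; fully funding one project
# means explicitly pulling chairs away from another.
```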

Investors would be keen on understanding the competitive constraints in the GPU market. An overview of how GPU shortages impact the growth rate of AI companies, and how companies might mitigate this limitation, would enhance the value of this section.

Global GPU demand vs. supply constraints in AI hardware


6. Making AI Work—One Coffee, Chair, and Bakery at a Time

AI is brilliant, transformative, and, most importantly, expensive. Every coffee it drinks, every GPU chair it tries to grab, and every generative search term Google avoids represents the complex dance between innovation and sustainability. Profitability for AI is about balancing all these pieces in a way that makes sense—not just for today, but for a future where compute costs less, talent density drives breakthroughs, and Grandma’s grocery store has enough flour for everyone.

The AI equation involves trade-offs between revenue, operational efficiency, and compute cost

A balancing act needed to make AI viable in the long term.