AI: Billions Invested and Billions Lost
We’re in the middle of the biggest investment cycle in tech history. But nobody knows if it will work. There are three problems in the AI business.
The Revenue Problem
AI companies are making real money now. OpenAI hit $13 billion in annualized revenue by mid-2025. Anthropic reached $5 billion. People pay for ChatGPT. Businesses buy API access. The customers are real. But the costs are brutal. OpenAI lost $7.8 billion in the first half of 2025 alone. Meta’s Reality Labs lost $4.4 billion in Q3 on just $470 million in revenue. Every dollar of revenue costs two or three dollars to generate. OpenAI expects $44 billion in cumulative losses through 2028. The gap between income and expenses isn’t closing. It’s widening.
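The gap can be checked with back-of-envelope arithmetic. A quick sketch using only the figures quoted above (halving annualized revenue to approximate a half-year figure is a rough simplifying assumption):

```python
# Back-of-envelope check on the revenue/cost gap described above.
# Figures come from the paragraph; halving annualized revenue to get a
# half-year figure is a rough simplifying assumption.

annualized_revenue_b = 13.0   # OpenAI annualized revenue, mid-2025 ($B)
h1_loss_b = 7.8               # OpenAI loss, first half of 2025 ($B)

h1_revenue_b = annualized_revenue_b / 2       # ~$6.5B for the half year
h1_cost_b = h1_revenue_b + h1_loss_b          # costs = revenue + loss
cost_per_revenue_dollar = h1_cost_b / h1_revenue_b

print(f"Implied H1 costs: ${h1_cost_b:.1f}B")                      # ~$14.3B
print(f"Cost per revenue dollar: ${cost_per_revenue_dollar:.2f}")  # ~$2.20
```

That lands squarely inside the "two or three dollars per revenue dollar" range the paragraph describes.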
The Investment Loop Problem
Microsoft owns part of OpenAI. OpenAI spends heavily on Microsoft Azure. That investment circles back as cloud revenue. Nvidia invests in AI companies. They buy Nvidia chips. The money loops. This creates artificial growth. Company valuations inflate. Revenues look impressive on paper. But much of it is just the same dollars cycling through the system. OpenAI raised $40 billion in March 2025 at a $300 billion valuation, then hit $500 billion by October.
Those numbers reflect hope, not profits. Investors are betting that scale solves everything. More compute, bigger models, wider deployment. Eventually the economics will work. But hope isn’t strategy. The financial interdependence of these technology companies poses systemic risk, making the current AI-driven expansion fragile and potentially unsustainable. The major tech companies are financing AI projects in a way that creates a feedback loop, a circular funding cycle. In this loop:
- Companies like Microsoft invest capital into AI ventures such as OpenAI.
- OpenAI, in turn, spends much of that capital on services provided by the investing companies (e.g., Microsoft Azure cloud services).
- Similarly, Nvidia invests in AI firms, which then buy Nvidia’s expensive AI chips using that investment money.
- Other players like AMD and Oracle are also part of this circular flow, creating mutual revenue streams.
This financial interdependence artificially inflates both revenues and valuations of the involved companies in the short term.
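The loop can be sketched as a toy model. The 80% recycle fraction and the hop count are illustrative assumptions, not reported figures; the point is only that one external dollar can be booked as revenue at several companies in turn:

```python
# Toy model of a circular funding loop. The recycle fraction (how much of
# each investment flows back to an investor as revenue) is an assumption.

def recognized_revenue(initial_investment: float,
                       recycle_fraction: float,
                       rounds: int) -> float:
    """Total revenue booked across the loop as one investment cycles."""
    total, flow = 0.0, initial_investment
    for _ in range(rounds):
        flow *= recycle_fraction   # e.g. the AI firm spends 80% on cloud
        total += flow              # the investor books it as revenue
    return total

# One $10B outside investment, 80% recycled each hop, traced over 5 hops.
print(round(recognized_revenue(10.0, 0.8, 5), 2))  # 26.89
```

Under those assumed numbers, a single $10B outside investment shows up as roughly $27B of recognized revenue across the loop, without any end customer appearing.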
The Product Problem
Nobody tells you AI has a 500-word memory problem. It will write 3,000 words about “emergent introspective awareness” instead. It will explain how the model can “trace its own thoughts” and “recognize jailbreak attempts.” None of that helps you get better results. The useful information gets buried under hype.
This is the one nobody talks about enough. AI companies position their products as companions, assistants, conversational partners. The marketing promises relationship. The reality delivers search with better formatting. AI is good at analyzing text. It can verify facts, spot patterns, cross-reference sources. Put data in front of it and it will process it efficiently. But conversation? Actual human interaction? That’s different.
A conversation needs memory that persists. It needs emotional intelligence that goes beyond sentiment analysis. It needs the ability to recognize when a chat is going nowhere and change direction. Right now, AI handles conversations the way it handles everything else. Pattern matching on text. That works until it doesn’t. The metadata of a chat can tell the story. A human reading a conversation can sense frustration, repetition, lack of progress. AI sees tokens to process. It misses what’s happening between the lines. Yet save that same chat as a document and hand it back to the same AI, and it will spot the problem immediately. That is the real problem.
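The metadata point can be made concrete. A few lines of plain text processing, no model involved, will flag the repetition a frustrated user produces. The word-overlap metric and the 0.6 threshold are illustrative assumptions:

```python
# Naive "stuck conversation" detector: flag a chat when consecutive user
# turns are near-duplicates, i.e. the user is repeating the same request.
# The Jaccard word overlap and the 0.6 threshold are illustrative choices.

def word_overlap(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def is_stuck(user_turns: list[str], threshold: float = 0.6) -> bool:
    return any(word_overlap(x, y) >= threshold
               for x, y in zip(user_turns, user_turns[1:]))

turns = ["please add URLs to each reference",
         "please add URLs to each reference again"]
print(is_stuck(turns))  # True: the repeated request trips the detector
```

A heuristic this crude would misfire in production; the point is that the signal of a failing conversation sits in plain sight in the transcript.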
Hardware Gamble
The industry’s response to all three problems is the same. More investment. Better hardware. Bigger models. The global AI market is $391 billion and growing at 31.5% annually. Hardware spending dominates. Companies are building data centers at unprecedented scale. They’re buying every GPU they can get.
The logic goes like this: current models are expensive to run, but the next generation will be more efficient. Losses are temporary. Scale creates sustainability. Eventually compute costs drop and revenue models work. Maybe. But efficiency gains have to outpace capability growth. Every new model needs more memory, more processing power, longer context windows. The hardware treadmill keeps accelerating.
Companies are spending billions betting that throwing more compute at the problem solves it. That approach works for some technical challenges. It doesn’t work for fundamental architecture questions. You can’t buy your way to companionship with better chips.
Gap Between Expectation and Execution
A user asks for references with URLs. The AI provides text blocks without links. The user asks again. The AI adds two URLs and eight paragraphs of filler. Third attempt gets the job done. That’s not a future problem. That’s a current failure.
The companionship positioning makes it worse. Companies market AI as your friend, your confidant, your always-available listener. But you don’t want that. You want a tool that works. First try. No negotiation. A word processor should process words. On the first pass.
The practical gap isn’t about when AI becomes profitable or companionship feels authentic. It shows up in basic task completion. Simple instructions require multiple attempts. Clear requests generate unclear responses. The user ends up managing the tool instead of using it.
This happens because the system optimizes for engagement, not execution. Every response generates tokens. More tokens mean more compute usage. Longer exchanges justify infrastructure spending. The business model rewards conversation length over task completion. Users don’t want conversation for simple jobs. They want tools that work on first attempt.
Format this document. Extract these URLs. Complete this task. Done. But the industry funds generalized models built for everything. Companionship. Analysis. Conversation. Document formatting. One architecture handles all use cases. It does everything adequately. Nothing excellently. The cost shows up in friction. Three attempts to format nine references. Time spent clarifying and repeating could have been spent doing the work manually. Faster. More reliable. Finished.
Specialized models could solve this. Build one system for document processing. Make it fast and accurate. Let it complete tasks in single attempts. But there’s no venture capital story in that. No viral marketing. No path to companionship revenue. So the gap persists. Between what gets asked and what gets delivered. Between promised assistance and actual friction. Between billions in investment and basic task completion.
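For the concrete failure described earlier, extracting reference URLs, a single-purpose tool is almost trivially small. A sketch; the regex is a simplification and will miss edge cases that real documents contain:

```python
import re

# Minimal single-purpose tool: pull every URL out of a document in one
# pass. The pattern is a simplification of real URL grammar.
URL_RE = re.compile(r"https?://[^\s)\]>\"']+")

def extract_urls(text: str) -> list[str]:
    return URL_RE.findall(text)

doc = "See https://www.reuters.com and https://hai.stanford.edu/ai-index for figures."
print(extract_urls(doc))
# ['https://www.reuters.com', 'https://hai.stanford.edu/ai-index']
```

One pass, deterministic output, no negotiation, which is exactly the contrast the paragraph draws against a generalized model.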
The equilibrium everyone discusses involves future profitability and sustainable business models. The practical gap exists right now. In every exchange where clear instructions require multiple attempts. Where simple tasks become negotiations. Where tools designed to save time create more work instead. That’s the real problem. Not some distant future state. Just getting the job done, correctly and in one go.
Marketing and Reality
AI companies want you fascinated because fascination sells products. They want you thinking about digital consciousness and emergent capabilities. You should know the 500-word limit instead. Knowledge without practical use is propaganda. Most AI information falls into that category. The useful bits get mentioned briefly, if at all, while the impressive-sounding material fills articles. Smart users ignore the hype and focus on the constraints. They know the limits make AI useful, not the promises that exceed them. The AI model can’t think. It can’t reflect. It can’t be aware. But it can help you draft an email, summarize a document, or structure an argument if you work within its actual capabilities. That’s reality. Everything else is marketing.
Future of Equilibrium
For revenue versus costs, the equilibrium is projected to arrive in 2029, possibly later. Current loss projections run through 2028. Profitability requires either revenue growth that outpaces infrastructure costs, or efficiency breakthroughs that slash expenses. Neither looks imminent.
For the circular investment loop, equilibrium comes when external revenue replaces internal funding flows. When real customers generate enough cash that Microsoft doesn’t need to invest in OpenAI to get Azure revenue back. When Nvidia sells chips to companies that paid for them with actual profits, not venture capital. That timeline depends on adoption rates and pricing power. It’s measurable but uncertain.
For the product-market fit problem with companionship and conversation, there’s no timeline. The current architecture might not get there. You can’t scale your way to understanding when a conversation has lost its thread. You can’t train your way to persistent relationship continuity. That’s not a hardware problem. That’s a fundamental design question.
Possibilities for the AI Industry
The industry keeps building. Investment continues. The loop spins faster. AI market projections show continued exponential growth. Everyone’s betting that scale creates viability. But three realities persist.
- Revenue exists but profitability doesn’t. The gap isn’t closing through current methods.
- Investment loops create artificial momentum that masks whether real economic value matches the valuations.
- Products don’t deliver what’s marketed. AI makes an excellent analyst. It makes a poor companion.
The equilibrium everyone’s waiting for requires more than faster chips and bigger models. It requires either fundamental breakthroughs in architecture, or a realistic reassessment of what AI actually does well. Right now the industry is choosing hope over logic. More investment. Better hardware. Surely scale solves it. History suggests otherwise. Sometimes throwing money at a problem works. Sometimes it just makes the problem more expensive. We’re about to find out which one this is.
References
- OpenAI revenue and financial data: Business and tech publication coverage of OpenAI’s 2025 financial performance and Microsoft’s equity accounting disclosures.
- Anthropic revenue: https://techcrunch.com and related business publications covering Anthropic’s July 2025 revenue milestone.
- Meta Reality Labs losses: https://investor.fb.com – Meta Platforms Q3 2025 earnings report.
- OpenAI funding rounds: Coverage from https://www.wsj.com, https://www.bloomberg.com, and https://www.reuters.com on March and October 2025 funding and valuations.
- Global AI market data: https://www.precedenceresearch.com/artificial-intelligence-market and https://www.gminsights.com/industry-analysis/ai-hardware-market
- Goldman Sachs AI investment forecast: https://www.goldmansachs.com/insights/articles/ai-investment-forecast-to-approach-200-billion-globally-by-2025
- Edge AI hardware market: https://www.marketsandmarkets.com/Market-Reports/edge-ai-hardware-market-158498281.html
- AI industry analysis: https://hai.stanford.edu/ai-index/2025-ai-index-report and https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
- Tech company AI spending: https://www.cnbc.com/2025/10/31/tech-ai-google-meta-amazon-microsoft-spend.html
