Fears Grow of AI Bubble and Potential Pressure Points That Could Burst It
Debate continues over whether AI is in a bubble, with three main pressure points identified that could trigger a burst.
Despite some declines in AI stock prices since mid-2025, the US stock market remains heavily influenced by AI-related companies. Forty-one AI stocks generate about 75% of the returns in the S&P 500, while seven major tech companies—Nvidia, Microsoft, Amazon, Google, Meta, Apple, and Tesla—account for approximately 37% of the S&P 500's performance.
Nvidia's CEO Jensen Huang has stated that the industry is far from being in a bubble. Critics, however, warn that a burst could have systemic repercussions, including bank illiquidity, bailouts, and costs borne by taxpayers.
Significant AI investment continues, with major players planning around $1 trillion in AI spending by 2026. Companies like Microsoft, Amazon, Google, Meta, and Oracle are leading this effort, while OpenAI intends to invest $1.4 trillion over three years, even though its revenue trails far behind, at an estimated $20 billion in 2025.
The construction of AI data centers and the resulting power demands are putting strain on electrical grids. Notable projects such as Stargate in Texas and Meta’s Hyperion in Louisiana are indicative of this growth.
Analysts caution that depreciation of AI hardware could significantly erode value. Estimates suggest a potential loss of $780 billion under three-year chip depreciation, rising to $1.6 trillion under two-year depreciation. By these estimates, around $2 trillion in profits by 2030 would be needed to justify the outlays.
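Why the depreciation schedule matters so much can be illustrated with a simple straight-line model. The capex figure and schedules below are hypothetical placeholders, not the analysts' actual assumptions; the point is only that shortening an asset's useful life front-loads the annual write-down.

```python
# Illustrative straight-line depreciation. All figures are hypothetical,
# chosen only to show the mechanics, not the analysts' estimates.

def annual_depreciation(capex: float, useful_life_years: int) -> float:
    """Straight-line: the asset loses an equal share of value each year."""
    return capex / useful_life_years

# Hypothetical $300B of AI chips purchased in a single year.
capex = 300e9

three_year = annual_depreciation(capex, 3)  # $100B written down per year
two_year = annual_depreciation(capex, 2)    # $150B written down per year

# Moving from a 3-year to a 2-year life raises the annual charge by 50%,
# which is why the assumed useful life of GPUs swings loss estimates so widely.
print(f"3-year schedule: ${three_year / 1e9:.0f}B/yr")
print(f"2-year schedule: ${two_year / 1e9:.0f}B/yr")
```

The same mechanism scales to the industry-wide figures in the text: the shorter the assumed useful life of the chips, the more profit is required each year just to cover the write-downs.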
Adoption of AI technologies is increasing, although monetization remains a challenge. OpenAI reports about 800 million weekly active users, of whom roughly 5% pay. Enterprise adoption was between 8% and 12% in early 2025, rose to 14% in June, and recently settled around 12%. Most firms remain in pilot phases or are just beginning to scale AI solutions, according to McKinsey.
While large language models (LLMs) improve as compute is scaled up, they often lack real-world understanding and long-term memory. Experts caution that even a 100x increase in compute may not transform outcomes as hoped, reflecting growing skepticism within the industry about the scaling hypothesis.