Anyone waiting for a single date when “the AI bubble pops” is probably using the wrong model. A classic bubble bursts in one dramatic move when faith disappears faster than cash can arrive. AI in 2026 looks messier than that. There is obvious excess in private valuations, venture concentration, and infrastructure spending. There is also unmistakably real demand: OpenAI says it is generating about $2 billion a month in revenue and serving 900 million weekly users; Alphabet says Gemini and Google Cloud are driving growth across its business; Amazon, Meta, Microsoft, Nvidia, and TSMC are all posting numbers that would have looked absurdly optimistic two years ago.
The cleaner answer is that the AI bubble is more likely to deflate in stages than explode in a single afternoon. My read, based on current spending, adoption, and power constraints, is that the first serious repricing pressure is already visible in 2026, and the harder test arrives across 2027 and 2028, when investors will want proof that massive capex is turning into durable cash flow rather than clever demos, token growth, and heroic future assumptions. That does not mean AI demand disappears. It means parts of the market stop getting priced as if every layer will mint monopoly margins.
The real question sits under the word bubble
The first mistake is talking about “AI” as though it were a single tradable object. It is not. There are at least four different AI markets moving on different clocks: chips and foundries, cloud infrastructure, foundation model labs, and the application layer built on top. Each has its own economics, its own bottlenecks, and its own route to disappointment. Nvidia can keep growing even if dozens of AI app startups die. An enterprise software company can struggle to monetize copilots even while OpenAI or Anthropic keeps gaining share. A cloud provider can make money from the rush even if some customers are wildly overbuilding.
That distinction matters because people often ask the bubble question as if they mean one of three very different things. They might mean public market valuations, which can reprice quickly. They might mean private startup valuations, which can remain inflated longer because they are marked less often. Or they might mean real economic usefulness, which is slower, less theatrical, and much harder to read from daily headlines. Those three lines do not move together. Dot-com stocks crashed long before the internet stopped mattering. Railroads were overbuilt and overfinanced long before rail became indispensable. AI can follow the same pattern: a capital cycle can break while the technology keeps spreading.
That is why the word bubble needs discipline. If you mean “is AI overhyped,” the answer is yes in obvious pockets. If you mean “is AI economically fake,” the answer is no. The Stanford AI Index says generative AI reached 53% population-level adoption within three years, and it estimates annual value to U.S. consumers at $172 billion by early 2026. The Federal Reserve says about 18% of U.S. firms had adopted AI by year-end 2025. Those figures do not describe a mirage. They describe a technology that is spreading fast, but unevenly, through consumers and businesses.
The better way to ask the question is this: which parts of the AI stack are priced for perfection, and what happens when the money gets less patient? That version is useful because it shifts attention away from slogans and toward cash conversion, utilization, and timing. The bubble will burst first wherever revenue quality is weakest, power is hardest to secure, customers are most concentrated, and accounting relies most heavily on run-rate storytelling.
Demand is no longer hypothetical
Start with the part that bubble skeptics often understate. Real demand exists already, and in some cases it is huge. OpenAI’s own disclosures are the bluntest example: it says it now generates roughly $2 billion in monthly revenue, that enterprise already makes up more than 40% of revenue, that Codex has 3 million weekly active users, and that its APIs process more than 15 billion tokens per minute. Even allowing for company optimism, those are not vanity numbers from a lab with no market. That is the profile of a platform business with genuine usage and paying customers.
Google’s numbers tell a similar story from a different angle. Alphabet says Gemini now processes more than 10 billion tokens per minute through direct API usage, the Gemini app has more than 750 million monthly active users, more than 120,000 enterprises use Gemini, and revenue from products built on its generative AI models grew nearly 400% year over year in the fourth quarter of 2025. It also says it has sold more than 8 million paid Gemini Enterprise seats. That is not the profile of a market waiting to discover whether anyone wants the product. It is the profile of a market wrestling with how fast usage can be monetized and how much infrastructure it needs to sustain that pace.
Amazon’s disclosures are less flashy, but they matter because they show demand showing up in infrastructure and tooling. AWS grew to $128.7 billion in sales in 2025, Amazon says Bedrock is used by more than 100,000 companies, and its Trainium and Graviton chips reached a combined annual revenue run rate above $10 billion. Amazon is not behaving like a company funding a science project for fun. It is behaving like a company that sees AI demand large enough to justify making silicon, clusters, and modernization tools part of its core cloud strategy.
The same pattern appears lower in the stack. Nvidia reported $62.3 billion in quarterly data center revenue for fiscal Q4 2026, up 75% from a year earlier. TSMC said revenue from AI accelerators accounted for a high-teens share of its total revenue in 2025, and that it expects 2026 revenue growth near 30% in U.S. dollar terms. ASML’s 2025 report, as described by Reuters, shifted from cautious language to saying the AI boom had become the main driver of demand across a broader customer base. A fake demand cycle does not usually show up this clearly across chips, foundries, lithography, cloud, and enterprise software at the same time.
There is also measured evidence outside company marketing. The Fed’s review shows AI adoption among firms moving higher. Stanford’s 2026 index shows consumer adoption spreading faster than the PC or the internet did. PwC’s 2026 study says nearly three-quarters of AI’s economic value is being captured by just 20% of companies. That last point matters most. The money is real, but it is not evenly distributed. The leaders are pulling away; the rest are still experimenting. That is one of the clearest signs of an early platform shift: broad attention, narrow capture.
Spending is running ahead of proof
Now the uncomfortable side. The spending curve is still ahead of the proof curve. Goldman Sachs argued in 2024 that generative AI could drive about $1 trillion in capex over the coming years while having relatively little to show for it at that point. Two years later, that argument has not disappeared. If anything, it has become more concrete. Allianz now describes AI capex as the defining market debate of 2026 and says U.S. Big Tech investment, after rising about 60% in 2025, is expected to climb another 50% and exceed $600 billion.
The company disclosures behind that anxiety are startling. Alphabet guided to $175 billion to $185 billion of 2026 capex. Amazon said it expects about $200 billion of capital expenditures in 2026. Meta guided to $115 billion to $135 billion. TSMC said its 2026 capital budget will be $52 billion to $56 billion. Microsoft has not published a comparable full-year guide, but it still reported $49.3 billion in property and equipment additions in just the first six months of FY2026. Even before you start adding in everyone else, this is a spending boom on a scale that demands near-perfect execution.
That is the core tension in the market. Revenue is growing. Usage is growing. The issue is whether those gains are large enough, soon enough, to justify the financing burden and margin pressure being created right now. Microsoft’s cloud business is still expanding fast, but it also says gross margin percentage fell slightly because of continued AI infrastructure investment and growing AI product usage. Meta’s 2025 capex was already $72.2 billion, and its 2026 guide steps up sharply again. Alphabet’s own language is unusually direct: it says AI investments and infrastructure are driving revenue, yet it still sees the need to spend up to $185 billion this year. Strong demand is not the same thing as easy payback.
That gap between demand and payback is where bubbles usually hide. Not in the first-order fact that customers exist, but in the second-order assumption that every dollar of capacity will be filled at rich economics for long enough to justify its cost of capital. The more capital-intensive the cycle becomes, the less forgiving the market gets. Software investors can tolerate long product gestation when marginal costs are low. They get much stricter when the story starts to look like utilities, fabs, power procurement, and balance-sheet engineering.
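To make that second-order point concrete, here is a toy payback sketch with entirely hypothetical numbers: a capacity-style investment whose return flips from positive to negative on nothing more than the utilization assumption. None of the figures correspond to any real company or project.

```python
# Toy payback model, hypothetical numbers only. It illustrates how
# sensitive capital-intensive AI economics are to utilization.

def simple_npv(capex, annual_gross_profit, years, discount_rate):
    """Present value of level annual cash flows minus the upfront spend.
    No terminal value, no growth: deliberately simple."""
    pv = sum(annual_gross_profit / (1 + discount_rate) ** t
             for t in range(1, years + 1))
    return pv - capex

capex = 10.0                   # $bn upfront, hypothetical
full_utilization_profit = 3.0  # $bn/yr gross profit at 100% utilization, hypothetical

for utilization in (0.9, 0.7, 0.5):
    npv = simple_npv(capex, full_utilization_profit * utilization,
                     years=6, discount_rate=0.10)
    print(f"utilization {utilization:.0%}: NPV ${npv:+.2f}bn")
# Roughly +$1.8bn at 90% utilization, -$0.9bn at 70%, -$3.5bn at 50%.
```

The point is not the specific figures but the sensitivity: at a 10% discount rate, the gap between 90% and 70% utilization is the gap between a project that pays back and one that never does. That is why "strict like utilities and fabs" is the right posture for this layer.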
A lot of today’s excitement is built on an implicit promise: spend now, dominate later. That can work. It did for cloud. It did for smartphones. It also fails regularly, because the later part arrives slower than models assume. The AI bubble, where it exists, is mostly a bubble in timing and margin expectations, not in the existence of the technology itself.
Four AI markets are moving at different speeds
The market becomes easier to read once you stop treating it as one trade.
Where the pressure is most likely to show
| Layer | What keeps it standing | What could crack first |
|---|---|---|
| Chips and semiconductor equipment | Real demand, tight supply, visible revenue | Cyclicality returning faster than expected |
| Cloud and data center infrastructure | Capacity scarcity, hyperscaler cash flow | Power shortages, utilization misses, lower pricing |
| Foundation labs | Fast revenue growth, brand strength, enterprise pull | Valuation inflation, run-rate confusion, compute costs |
| App layer and AI wrappers | Low barriers, fast experimentation | Weak differentiation, price compression, churn |
The table is a map of vulnerability, not a forecast of doom. The lower you go in defensibility and the higher you go in capital intensity, the more fragile the narrative becomes. Chips have brutal cycles, but Nvidia, TSMC, and ASML are tied to very real bottlenecks. Cloud infrastructure can stay strong longer, though it lives or dies on utilization and power. Foundation labs can look unstoppable until investors start asking harder questions about revenue quality. The app layer has the easiest time launching and the hardest time defending margins.
The venture market is already showing this separation through concentration. Crunchbase says global venture investment hit $300 billion in Q1 2026, with $242 billion going to AI companies and four mega-rounds accounting for 65% of global venture investment in the quarter. OpenAI alone raised $122 billion at an $852 billion valuation, and Anthropic raised $30 billion at a $380 billion valuation. When capital starts crowding into a handful of giants while the long tail fights for oxygen, you are no longer looking at a broad-based boom. You are looking at a market choosing its likely survivors early.
That concentration cuts both ways. It supports the leaders by giving them more runway, more compute access, and more distribution. It also makes the whole cycle more brittle. If a few large names carry too much of the valuation story, any stumble by one of them can hit sentiment far beyond its own business. That is especially true in private markets, where round sizes and marks can create an illusion of certainty right up until the next financing happens at a different price or on tougher terms.
So the right mental model is not “AI bubble” in the singular. It is a stack with some near-monopolistic chokepoints, some expensive infrastructure bets, some genuine platform businesses, and a very large number of fragile passengers riding on top. The passengers usually go first. The platform winners usually wobble, reprice, and keep building.
The power bottleneck is where excess gets exposed
If you want to know where the first hard limit sits, look at electricity. Power is where AI hype stops being poetry and turns into permits, transformers, turbines, cooling systems, and grid queues. The IEA says data-center electricity demand rose 17% in 2025, with AI-focused data centers climbing even faster. It also projects electricity generation for data centers growing from 460 TWh in 2024 to more than 1,000 TWh in 2030 in its base case. That is not a small engineering challenge. It is the kind of physical bottleneck that can delay projects, distort returns, and punish optimistic timelines.
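As a back-of-envelope check on those IEA base-case endpoints (460 TWh in 2024 to more than 1,000 TWh in 2030), the implied compound growth rate is a one-liner:

```python
# Implied compound annual growth from the IEA base-case endpoints cited
# above: 460 TWh of data-center electricity in 2024, 1,000+ TWh in 2030.

start_twh, end_twh, years = 460, 1000, 6

cagr = (end_twh / start_twh) ** (1 / years) - 1
print(f"Implied compound annual growth: {cagr:.1%}")  # roughly 14% per year
```

Roughly 14% compounded for six years, and that is the base case, not the bull case. Growth at that rate in physical generation and grid capacity is exactly why build timelines and interconnection queues carry so much weight in the rest of this argument.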
Uptime Institute lands on the same pressure point from the operator side. It says AI workload growth will remain concentrated among organizations that can support high-density deployments, and that power shortages will intensify pressure on already constrained grids. It also says large-model infrastructure is concentrating in a smaller number of large organizations. That is a polite way of saying this boom is becoming physically harder to enter and more expensive to scale. In a capital cycle, those are classic ingredients for later disappointment.
This matters because markets love to extrapolate digital adoption and forget physical build-out times. Software can go viral in weeks. Power infrastructure cannot. A company can announce a data center, sign a chip contract, and talk about future capacity long before electricity is actually flowing at the needed level. That time gap is fertile ground for overvaluation because capital gets committed before utilization is proven. If demand stays hot, the bottleneck supports incumbents. If demand cools even modestly, the projects announced at peak excitement can suddenly look overbuilt or badly timed.
You can already see how the market is adjusting its language. The story has shifted from “there is infinite demand for compute” to “there is enormous demand, but delivery is constrained by power, grid interconnection, and build complexity.” That is a healthier story because it is more honest. It is also less friendly to the highest valuations, because honest infrastructure stories tend to trade on returns, not mystique.
My suspicion is that the first visible AI-bubble crack will not be a consumer revolt or a sudden collapse in model quality. It will be some mix of delayed facilities, lower-than-assumed utilization, and tougher scrutiny of who pays for the power and when. Markets can absorb a lot of hype. They get much less forgiving when the bottleneck is a substation.
Enterprise rollout will decide who deserves their valuation
The consumer story is spectacularly visible, but the durable economics of AI will be decided in enterprise deployments. That is where the market’s patience will either be rewarded or punished. McKinsey’s 2025 global survey says the transition from pilots to scaled impact is still a work in progress at most organizations. PwC’s 2026 study sharpens the point by saying 74% of AI’s economic value is being captured by just 20% of organizations. That is the middle chapter of a platform shift, not the end of one. Leaders are converting usage into business redesign; laggards are still sprinkling tools over unchanged workflows.
That divide explains why the bubble debate gets so noisy. One executive sees real gains and wants more budget. Another sees a scattered set of pilots and concludes the whole thing is overhyped. Both can be telling the truth. The difference is often not the model. It is workflow design, data readiness, governance, and whether the company is serious enough to rewire the business instead of bolting chat windows onto old processes. McKinsey says high performers are much more likely to have leadership commitment, human validation processes, and the broader operating model required to capture value. PwC says the leaders are pointing AI toward growth and business-model reinvention, not only cost cutting.
Google’s enterprise disclosures suggest that some of this is moving faster than many skeptics think. It says more than 120,000 enterprises use Gemini, more than 8 million paid Gemini Enterprise seats have been sold, and enterprise AI agents are driving larger commitments. OpenAI says enterprise is already more than 40% of revenue and on track to reach parity with consumer by the end of 2026. Those are strong signs that the center of gravity is shifting away from novelty and toward operational adoption.
Still, the harder test is not whether companies sign contracts. It is whether AI spending turns into durable margin expansion or credible new revenue streams. That takes longer than the market likes. It also creates room for brutal repricing among vendors whose products are easy to imitate or whose differentiation is mostly distribution. Enterprise buyers will gradually separate the companies selling indispensable workflow change from the companies selling expensive autocomplete. That sorting process is exactly what a bubble deflation looks like in software. Not a clean crash. A much harsher ranking.
Productivity gains are real, patchy, and stubbornly human
One reason the AI bubble has not burst already is simple: there are genuine productivity gains. The trouble is that they are uneven, task-specific, and often smaller in the aggregate than the loudest believers suggest. NBER’s well-known customer-support study found a 14% productivity gain on average, with much larger benefits for novice and lower-skilled workers. The BIS, using European firm data, finds AI adoption raising labor productivity by 4% on average in the short run, with gains concentrated in medium and large firms and supported by complementary investment in software, data, and training.
Those are meaningful numbers. They are also a long way from the lazy fantasy that every white-collar workflow is about to be automated cleanly. OECD’s review of experimental research says generative AI can improve efficiency, innovation, and entrepreneurship, but it also stresses trust, human expertise, and organizational adaptation. The ECB’s summary of the literature makes the spread explicit: pessimistic estimates see only marginal aggregate TFP gains over a decade, while more optimistic work sees AI adding between 0.4 and 1.3 percentage points to annual labor productivity growth in high-exposure, high-adoption economies. That range is a reminder that the economic upside is not settled law. It depends on adoption quality, not just model capability.
There is also evidence pushing against the fantasy of universal speedups. METR found that experienced open-source developers working on their own repositories took 19% longer when using early-2025 AI tools. MIT Sloan reported that GitHub Copilot changed how developers spent time, shifting them toward more core coding and less project management, with junior developers seeing the biggest benefits. Those results do not cancel the bullish case. They do destroy the cartoon version of it. AI looks more like a jagged productivity tool than a magic wand.
That matters for bubble timing because the market is still pricing many businesses as though performance gains will be broad, immediate, and margin-rich. Real adoption is much more selective. Customer support, coding assistance for some developers, search, marketing, document workflows, and enterprise knowledge tasks are already moving. Plenty of harder work remains brittle, supervisory, or costly to verify. If AI underdelivers anywhere, it will underdeliver first in the leap from narrow task improvement to company-wide economic transformation.
The next reset probably arrives as a deflation, not a detonation
That brings us to timing. The most likely path is a rolling deflation, not a Hollywood crash. Public markets can reprice quickly when capex guidance gets too aggressive or margins wobble. Private valuations can stay lofty longer, especially for the top labs, because they have capital, distribution, and strategic value. The real shakeout will probably appear first in weaker software names, second-tier infrastructure plays, and startups whose only asset is wrapping someone else’s model with a prettier interface.
Foundation labs are a special case. OpenAI and Anthropic clearly have real revenue momentum. Reuters reported OpenAI above $25 billion in annualized revenue by the end of February 2026, while another Reuters report said early-2026 annualized revenue figures had been around $20 billion for OpenAI and $9 billion for Anthropic. Yet Reuters also pointed out that Anthropic’s run-rate figures and cumulative recognized revenue tell very different stories, which is a warning sign for investors who treat all growth metrics as interchangeable. When money gets tighter, the market becomes much fussier about the difference between annualized snapshots and durable, recognized revenue.
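The run-rate confusion Reuters flags is easy to show with a minimal sketch. The monthly ramp below is invented for illustration, not any company's actual figures; it just needs to grow fast, because steep growth is what drives the two metrics apart.

```python
# Hypothetical monthly revenue ramp in $bn. Illustrative only.
monthly_revenue = [0.2, 0.3, 0.45, 0.7, 1.0, 1.5]

recognized = sum(monthly_revenue)     # what accounting actually books over the period
run_rate = monthly_revenue[-1] * 12   # latest month annualized: the headline number

print(f"Recognized over six months: ${recognized:.2f}bn")
print(f"Annualized run rate:        ${run_rate:.1f}bn")
# The run rate ($18.0bn) is more than four times the recognized total
# ($4.15bn). Both numbers are true; they answer different questions.
```

When growth is steep, the latest-month annualization dwarfs what has actually been booked. That is the gap investors start pricing once money gets tighter and "annualized" headlines are restated in plainer accounting terms.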
Infrastructure could remain buoyant longer because it sits closer to real scarcity. Nvidia, TSMC, and the cloud providers are selling into a genuine build-out. But even that layer is not immune. If model efficiency improves faster than expected, or if customers learn to use inference more intelligently, demand growth can decelerate before capacity plans do. Google already says it lowered Gemini serving unit costs by 78% over 2025 through optimization and utilization improvements. Efficiency is great for users. It is less great for simplistic “infinite compute forever” assumptions.
That is why I would not forecast a single “burst” date. I would say this instead: 2026 is the year the market starts discriminating harder, 2027 is the year payback pressure gets serious, and 2028 is the year we may know which parts of the stack were priced like platforms but were really just passengers. That is an inference, not a published forecast, but it follows directly from the collision between front-loaded capex, still-uneven enterprise rollout, power bottlenecks, and valuation concentration.
The triggers most likely to matter
A few triggers are worth watching because they turn abstract debate into hard evidence.
First, watch capex discipline. If Alphabet, Amazon, Meta, Microsoft, and TSMC keep lifting spending while the market stops rewarding them for it, sentiment can turn quickly. Meta’s 2026 guide and Amazon’s $200 billion capex plan show how aggressive the build remains. Investors will tolerate that as long as revenue and margins hold. They will not tolerate it forever on promise alone.
Second, watch power delivery and data-center utilization. The IEA and Uptime are both telling the market that electricity and high-density deployment are becoming central constraints. If big projects slip, interconnection lags, or facilities come online later than planned, the repricing will hit infrastructure narratives first.
Third, watch enterprise concentration of gains. If PwC’s divide between leaders and laggards keeps widening while the broader corporate world stays stuck in pilot mode, the market will punish broad-based AI optimism and reward only the vendors with proven embedment in the winning companies.
Fourth, watch revenue quality at the foundation labs. OpenAI and Anthropic may both keep growing rapidly. The question is whether that growth is recurring, diversified, and high quality enough to justify private-market marks that are already historic. The more investors lean on run-rate headlines and compute narratives, the more violent the reaction can be when the numbers are restated in plainer accounting terms.
Fifth, watch the app-layer casualty count. The AI bubble that most people picture may end up being mostly a software and startup story: too many thin wrappers, too little pricing power, and too little defensibility against the model makers and cloud platforms underneath them. That is where shakeouts usually happen fastest, because customer switching is easy and capital is least patient.
After the froth comes the buildout
My answer, then, is not “the bubble bursts next quarter” and not “there is no bubble.” The closer answer is that AI contains a real technology wave wrapped in a very real capital-market excess. The excess is most visible in concentration, timing assumptions, and willingness to finance enormous infrastructure before the returns are fully legible. The technology is most visible in revenue, adoption, enterprise pull, and measurable productivity gains in specific tasks. Both things are true at once.
That is why the end of this phase will probably look familiar. A lot of companies will disappoint. Some valuations will prove unserious. Some data-center projects will arrive later or earn less than promised. Some app categories will get crushed by price compression. A few major platforms will emerge stronger because the shakeout clears the field around them. Bubbles do not disprove the underlying technology. They expose who mistook access to a trend for a durable business.
So when does the AI bubble burst? It is already beginning to lose its innocence. The market is moving from blanket enthusiasm to harder ranking. The loud phase of the boom ran on possibility. The next phase runs on utilization, margins, power, and repeatable enterprise value. That is usually the point where the froth comes off, the survivors keep building, and the technology settles into the economy for good.
FAQ
**Will the AI bubble burst all at once?**
Probably not. The evidence points to a staged deflation across different layers of the AI market, not one synchronized crash. Public equities can reprice fast, private valuations can lag, and actual adoption can keep rising while both happen.

**Is AI a bubble or a real technology shift?**
It is both. The technology shift is real, while parts of the financing and valuation story are overheated. That combination is common in major platform transitions.

**Which layer of the AI stack is most vulnerable?**
The weakest layer is usually the application layer with low differentiation, especially companies that rely on someone else’s model and have little pricing power or switching cost.

**Which parts of the AI market look strongest?**
The strongest areas are the scarce chokepoints: advanced chips, foundry capacity, cloud infrastructure with real customers, and a small number of foundation model platforms with strong distribution.

**Why does electricity matter so much to the AI bubble debate?**
Because AI infrastructure is no longer just software. Electricity, cooling, grid access, and data-center build timelines are now major constraints, and those constraints can break unrealistic financial assumptions.

**Can valuations fall while AI adoption keeps growing?**
Yes. That is one of the most likely outcomes. Valuations can fall sharply even while usage, revenue, and enterprise adoption continue to rise.

**Why do investors keep paying such high prices?**
Because the upside for the winners may be enormous. Investors are trying to own the platforms that become default infrastructure for work, software, search, and enterprise automation.

**Does real revenue mean a company is safely valued?**
No. Real revenue reduces one kind of risk, but it does not eliminate valuation risk, margin risk, or timing risk. A company can be real and still be overpriced.

**Are the big platforms safer than startups?**
Usually yes. The biggest cloud and platform companies have cash flow, existing customer bases, and multiple ways to monetize AI, which gives them more room to absorb missteps.

**What would an early warning of the burst look like?**
A mix of capex fatigue, delayed data-center delivery, weaker utilization, and tougher investor questions about returns would be the clearest early warning cluster.

**Why does enterprise adoption matter more than consumer hype?**
Because the durable economics of AI will be decided there. Consumer excitement is powerful, but enterprise deployments determine whether AI becomes embedded in budgets and workflows for years.

**Are AI productivity gains real?**
Yes, though they are patchy. Some studies show meaningful gains in customer support and selected coding tasks, while others show little benefit or even slower performance in expert workflows.

**Could efficiency gains hurt the infrastructure story?**
Yes. Better efficiency lowers cost for users, which is good for adoption, but it can also reduce the amount of compute needed per task, which may challenge the most aggressive infrastructure assumptions.

**Will every AI startup survive the shakeout?**
No. A big technology wave rarely supports every company born during the hype phase. Some will be acquired, some will shrink, and some will disappear.

**Can private valuations stay inflated longer than public ones?**
Yes, for a while. Private marks tend to move more slowly, especially for elite companies with strategic investors and lots of cash. That delay can make the eventual adjustment harsher.

**When is the reckoning most likely?**
The most important window looks like 2026 through 2028, when current spending plans collide with the need to show durable returns and practical enterprise value.

**Would a burst mean AI failed?**
No. It usually means capital was misallocated or expectations were too rich. The internet survived the dot-com crash. AI can survive an AI crash.

**So when does the AI bubble burst?**
The AI bubble will probably burst in layers, starting with overvalued software and overconfident infrastructure assumptions, while the core technology keeps spreading.
Author:
Jan Bielik
CEO & Founder of Webiano Digital & Marketing Agency

This article is an original analysis supported by the sources cited below.
The 2026 AI Index Report
Stanford HAI’s annual review of adoption, consumer value, and broader AI trends.
FY26 Q2 press release
Microsoft’s official earnings release with cloud growth and AI-related investment details.
Alphabet Announces Fourth Quarter and Fiscal Year 2025 Results
Alphabet’s official earnings release covering Gemini usage, Google Cloud scale, and 2026 capex guidance.
Alphabet earnings, Q4 2025: CEO’s remarks
Google’s detailed operating commentary on Gemini monetization, enterprise adoption, and infrastructure efficiency.
Amazon.com Announces Fourth Quarter Results
Amazon’s official results with AWS growth, Trainium traction, and 2026 capital spending plans.
Meta Reports Fourth Quarter and Full Year 2025 Results
Meta’s official release on revenue growth, capex, and its stepped-up 2026 infrastructure plans.
NVIDIA Announces Financial Results for Fourth Quarter and Fiscal 2026
Nvidia’s official statement on data-center revenue and continuing demand for AI compute.
TSMC 2025 Q4 quarterly results
TSMC’s investor materials hub for its Q4 2025 results and conference transcript.
Q4 2025 Taiwan Semiconductor Manufacturing Co Ltd Earnings Call
TSMC’s earnings-call transcript with commentary on AI accelerator revenue and 2026 capex.
2025 Annual Report
ASML’s annual-report portal with company financials and industry context.
ASML sees AI demand as long-term growth driver in 2025 annual report
Reuters coverage highlighting ASML’s shift toward a more bullish reading of AI-led semiconductor demand.
The next phase of enterprise AI
OpenAI’s enterprise update with revenue mix, API activity, and customer adoption signals.
OpenAI raises $122 billion to accelerate the next phase of AI
OpenAI’s funding announcement with valuation, usage, and revenue disclosures.
Anthropic raises $30 billion in Series G funding at $380 billion post-money valuation
Anthropic’s funding announcement with valuation and run-rate revenue figures.
OpenAI tops $25 billion in annualized revenue, The Information reports
Reuters reporting on OpenAI’s rapid revenue growth and enterprise push.
Anthropic may have closed the revenue gap on OpenAI. Here’s what it means for their IPOs
Reuters analysis of annualized revenue figures and the changing competitive balance between leading model labs.
Anthropic gives lesson in AI revenue hallucination
Reuters Breakingviews piece on the difference between run-rate claims and recognized revenue.
Amazon to invest up to $25 billion in Anthropic as part of $100 billion cloud deal
Reuters report on cloud commitments, custom silicon, and the scale of current infrastructure bets.
CoreWeave Reports Strong Fourth Quarter and Fiscal Year 2025 Results
CoreWeave’s filed earnings release showing revenue growth and backlog expansion in AI cloud infrastructure.
Gen AI: too much spend, too little benefit?
Goldman Sachs research note framing the capex-versus-payback debate around generative AI.
AI capex cycle: war-proof for now
Allianz Research note on AI capex intensity, investor concerns, and the economics of the buildout.
Data centre electricity use surged in 2025, even with tightening bottlenecks driving a scramble for solutions
IEA news analysis on fast-rising data-center electricity demand.
Energy supply for AI
IEA analysis of future electricity requirements tied to data-center and AI growth.
Five Data Center Predictions for 2026
Uptime Institute’s outlook on concentrated AI workloads, high-density infrastructure, and grid strain.
Uptime Institute Announces Five Data Center Predictions Report for 2026
Uptime’s summary of the report’s main findings on power shortages and infrastructure concentration.
Monitoring AI Adoption in the U.S. Economy
Federal Reserve note compiling survey-based evidence on U.S. business adoption of AI.
The State of AI: Global Survey 2025
McKinsey’s survey on AI deployment, scaling, and the gap between pilots and realized value.
The effects of generative AI on productivity, innovation and entrepreneurship
OECD review of experimental evidence on what generative AI improves and where its limits remain.
AI and the euro area economy
ECB speech summarizing the range of current productivity estimates linked to AI adoption.
AI adoption, productivity and employment: evidence from European firms
BIS working paper on firm-level productivity gains from AI adoption and their uneven distribution.
Generative AI at Work
NBER paper on measured productivity gains from AI assistance in customer support.
Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
METR study showing that AI tools slowed experienced open-source developers in one controlled setting.
Generative AI changes how employees spend their time
MIT Sloan summary of research on how coding assistants shift work patterns rather than simply speeding everything up.
Three-quarters of AI’s economic gains are being captured by just 20% of companies – with the leading companies focused on growth, not just productivity
PwC’s 2026 study on how AI gains are concentrating among a minority of firms.
Q1 2026 Shatters Venture Funding Records As AI Boom Pushes Startup Investment To $300B
Crunchbase’s data-led report on the concentration and scale of AI venture financing in early 2026.
Artificial Intelligence
IMF’s overview of AI’s macroeconomic stakes for productivity, labor markets, and inequality.
The Global Impact of AI: Mind the Gap
IMF working paper on why AI’s gains are likely to be uneven across countries and sectors.