OpenAI and Google reduce free AI access as infrastructure costs escalate
OpenAI and Google have simultaneously imposed tighter restrictions on free access to their most popular AI tools in response to soaring infrastructure costs. This shift marks a departure from previous periods of subsidized free usage toward encouraging paid subscriptions as demand overwhelms server capacities. OpenAI limited free users of its video generation tool, Sora, to six generations daily, down from more generous allowances, signaling a long-term adjustment rather than a temporary measure. Users now have the option to purchase additional generations if needed. Similarly, Google reduced free daily prompts for its Nano Banana image generator from three to two, and tightened access to its advanced AI model Gemini 3 Pro from five prompts per day to a more variable basic access tier, reflecting ongoing fluctuations in service availability.
The restrictions come amid unprecedented strain on computing infrastructure, fueled by surging consumer interest and corporate adoption. OpenAI partners such as Oracle, SoftBank, and CoreWeave have collectively borrowed upwards of $30 billion to build data centers and support growing AI workloads, with plans underway to finalize an additional $38 billion in loans. Oracle alone has raised $18 billion in bonds and could take on as much as $100 billion in debt over the coming years as it works to meet OpenAI's compute requirements. These commitments align with OpenAI's extensive compute contracts, totaling around $1.4 trillion over eight years, which far exceed its anticipated $20 billion in annual revenue.
In a related development, Nvidia's dominance in AI hardware faces a challenge as Google's Tensor Processing Units (TPUs) gain traction. Meta is reportedly negotiating to deploy Google's TPUs in its data centers beginning in 2027, threatening Nvidia's market share, which currently exceeds 80%. Google's TPUs offer significant cost advantages, reportedly cutting inference costs for AI applications such as Midjourney by 65% compared with Nvidia's H100 GPUs. This shift in hardware preference coincides with pressure on Nvidia's stock price, reflecting the broader competitive dynamics reshaping AI infrastructure investment.
These developments spotlight the immense financial and operational demands behind sustaining large-scale AI services, signaling an inflection point in the industry's approach to balancing free access and monetization. As demand continues to surge, both companies appear to be steering users toward paid tiers to support the escalating costs of maintaining cutting-edge AI capabilities.