Alphabet CEO Sundar Pichai has clarified the company's strict hierarchy for allocating scarce AI compute resources. The top priority is securing capacity for Google DeepMind to train frontier models. Remaining resources are distributed among Search, YouTube, and Google Cloud based on a return on invested capital (ROIC) framework. Pichai acknowledged that current supply constraints are limiting potential Cloud revenue growth.
To mitigate these supply pressures, Google is executing a strategic shift by directly selling TPU hardware to select external customers for deployment in their own data centers, a first in the chip's decade-long history. Target clients include Hudson River Trading, Thinking Machines Lab, and Boston Dynamics. CFO Anat Ashkenazi noted that while these hardware agreements contribute to the cloud backlog, the majority of revenue recognition is expected in 2027.
This strategic pivot coincides with surging demand: API token processing for models like Gemini has grown 60% quarter-over-quarter to 16 billion tokens per minute. Additionally, over the past year, 330 cloud customers have each processed more than 1 trillion tokens, underscoring the rapid scaling of enterprise AI adoption.