Thinking Machines Lab, the AI startup founded by former OpenAI CTO Mira Murati, has signed a major deal to expand its use of Google Cloud's AI infrastructure, powered by Nvidia's latest GPUs, TechCrunch has exclusively learned.
The agreement, valued in the single-digit billions of dollars, gives the lab access to Google's AI systems built on Nvidia's GB300 chips, along with infrastructure services for training and deploying models, according to a person familiar with the deal.
Google has been aggressively pursuing cloud deals with AI labs, hoping to pair its compute with adjacent services such as storage, Kubernetes, and Spanner. Anthropic recently partnered with Google and Broadcom for extensive TPU capacity.
The competition is fierce: Anthropic also recently signed a deal with Amazon for substantial capacity to train Claude.
Thinking Machines previously partnered with Nvidia, which also invested in the company. The Google deal, however, is the lab's first agreement with a cloud provider. It is non-exclusive, leaving the lab free to work with other providers in the future, and it signals Google's ambition to lock in early partnerships with up-and-coming labs.
Murati left her role as OpenAI's chief technology officer to found Thinking Machines in February 2025. The startup quickly raised a $2 billion seed round at a $12 billion valuation and has remained largely secretive about its work, though it has launched Tinker, a tool for automating the development of frontier AI models.
Wednesday's announcement offered a rare glimpse into Thinking Machines' work. Google's press release said its cloud will support the lab's reinforcement learning workloads, which are central to Tinker. Reinforcement learning has driven breakthroughs at DeepMind and OpenAI, and its heavy computational demands help explain the scale of the Google Cloud agreement.
According to Google, Thinking Machines is among the first customers to access its GB300-powered systems, which deliver a significant boost in training and serving speed.
"Google Cloud enabled record speed with essential reliability," said Myle Ott, a founding researcher at Thinking Machines.
