Elon Musk Hyping Up xAI’s Grok 3 with Massive GPU Investment

Elon Musk is creating a buzz around the next iteration of his AI chatbot, Grok. In a recent post on X (formerly Twitter), the billionaire hinted that Grok 3, the next version of xAI’s chatbot, would be “something special” after being trained on 100,000 H100s.

Musk’s reference points to Nvidia’s H100 graphics processing unit, built on the company’s Hopper architecture. These GPUs are highly sought after for AI development, especially for training large language models, and demand for them is skyrocketing in Silicon Valley as tech giants race to enhance their AI products.

Each Nvidia H100 GPU is estimated to cost between $30,000 and $40,000, which would put the hardware bill for training Grok 3 at roughly $3 billion to $4 billion. However, the exact nature of Musk’s procurement remains unclear.
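To see where that range comes from, here is a minimal back-of-envelope sketch in Python. The gpu_cost_range helper is illustrative, and the unit prices are the market estimates cited above, not confirmed purchase figures:

```python
# Back-of-envelope estimate using the reported figures above.
# Unit prices are market estimates, not confirmed purchase prices,
# and renting rather than buying would change the math entirely.
def gpu_cost_range(num_gpus: int, unit_low: int, unit_high: int) -> tuple[int, int]:
    """Return the (low, high) total hardware cost in dollars."""
    return num_gpus * unit_low, num_gpus * unit_high

low, high = gpu_cost_range(100_000, 30_000, 40_000)
print(f"Estimated Grok 3 hardware cost: ${low / 1e9:.0f}B to ${high / 1e9:.0f}B")
# Estimated Grok 3 hardware cost: $3B to $4B
```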

xAI may be renting these GPUs from cloud service providers. In fact, reports from May indicate that Musk’s xAI was negotiating with Oracle to spend $10 billion over several years on cloud servers.

Musk’s companies have a history of purchasing H100s outright. Notably, Musk diverted a $500 million shipment of these GPUs from Tesla to X. This significant investment underscores the scale at which Musk’s ventures are operating.

The leap to 100,000 GPUs marks a fivefold escalation over the previous generation. In an April interview with Nicolai Tangen, head of Norway’s sovereign wealth fund, Musk said that Grok 2 required around 20,000 H100s for training.

To date, xAI has rolled out Grok-1 and Grok-1.5, with the latest version available only to early testers and existing users on X. Musk announced that Grok 2 would be launched in August, with Grok 3 expected by the end of the year.

While 100,000 GPUs is a substantial number, Musk is not alone in this high-stakes AI arms race. Meta, led by Mark Zuckerberg, has been amassing an even larger stockpile of GPUs. Zuckerberg stated in January that Meta would acquire approximately 350,000 Nvidia H100 GPUs by the end of 2024.

The aim is a total of roughly 600,000 H100-equivalent GPUs once other models are included, a collection that translates to an estimated $18 billion investment in AI capabilities. AI infrastructure spending is expected to climb by double digits for the foreseeable future.
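The same hypothetical gpu_cost_range helper from the earlier sketch gives a rough sanity check on Meta’s figure, assuming the low end (about $30,000) of the H100 price range:

```python
# Meta's reported target: ~600,000 H100-equivalent GPUs.
# Assuming the low end of the H100 price range (~$30,000 per unit).
meta_total, _ = gpu_cost_range(600_000, 30_000, 30_000)
print(f"Estimated Meta GPU spend: ~${meta_total / 1e9:.0f}B")  # ~$18B
```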

ODSC Team

ODSC gathers the attendees, presenters, and companies that are shaping the present and future of data science and AI. ODSC hosts one of the largest gatherings of professional data scientists, with major conferences in the USA, Europe, and Asia.
