"Elon Musk is superhuman. What would take everyone 4 years only took him 19 days.": NVIDIA CEO lauds xAI's efforts in setting up 100,000 H200 AI GPUs in under 3 weeks

Elon Musk and his Grok AI.
(Image credit: Getty Images | NurPhoto)

What you need to know

  • Elon Musk's xAI team reportedly set up a powerful supercomputer featuring 100,000 NVIDIA H200 GPUs in 19 days.
  • NVIDIA CEO Jensen Huang claims the same process would ordinarily take up to 4 years, prompting him to call Elon Musk "superhuman."
  • Huang also indicated that NVIDIA played a significant role in the process, lending its networking expertise to xAI, as NVIDIA gear is complicated to network compared to traditional data center servers.

Earlier this year, Tesla CEO Elon Musk announced that xAI had begun training Grok AI with the world's most powerful AI cluster, powered by 100,000 liquid-cooled NVIDIA H100 graphics processing units (GPUs) connected over a single RDMA fabric. The cluster could make X's Grok AI chatbot "the world's most powerful AI by every metric" by the end of this year.

And now, the billionaire's vision has seemingly come to fruition. In a recent interview with the Tesla Owners Silicon Valley club shared on X, NVIDIA CEO Jensen Huang lauded Elon Musk's efforts after xAI set up a supercluster featuring 100,000 NVIDIA H200 GPUs in 19 days to power its advances.

For context, Huang described setting up xAI's supercomputer as a tedious and daunting task that requires well-versed professionals. He further indicated that taking such a project from the concept phase to the final output could conventionally take up to 4 years, a feat that Elon Musk achieved in 19 days. The CEO broke that timeline down into 3 years of planning and a year of shipping equipment, installing it, and getting the entire project to work.

Related: Elon Musk's xAI developed Colossus in just 122 days

According to NVIDIA CEO Jensen Huang:

"As far as I know, there's only one person in the world who could do that; Elon is singular in his understanding of engineering and construction and large systems and marshaling resources; it's just unbelievable."

The process included constructing the supercomputer's infrastructure, fully equipped with the liquid cooling and power needed to support its cutting-edge components, including 200,000 GPUs. Huang also indicated that his team collaborated closely with xAI throughout the project across engineering, networking, infrastructure, and software. He admitted that NVIDIA's equipment is rather sophisticated, making the networking process a tad more complicated than with traditional data center servers.

Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. You'll also catch him occasionally contributing at iMore about Apple and AI. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.