“Elon Musk Ordered Nvidia to Ship Thousands of AI Chips Reserved for Tesla to Twitter/xAI”, 2024-06-04:
Emails circulated inside Nvidia and obtained by CNBC show that Elon Musk told the chipmaker to prioritize shipments of processors to Twitter and xAI ahead of Tesla.
Musk has said he can grow Tesla into a major player in artificial intelligence and that the company is spending heavily on Nvidia’s AI processors.
By ordering Nvidia to let Twitter jump the line ahead of Tesla, Musk delayed the automaker’s receipt of more than $500 million in processors by months.
…On Tesla’s first-quarter earnings call in April 2024, Musk said the electric vehicle company would increase the number of active H100s—Nvidia’s flagship artificial intelligence chip—from 35,000 to 85,000 by the end of the year. He also wrote in a post on Twitter a few days later that Tesla would spend $10 billion this year “in combined training and inference AI.”
But emails written by Nvidia senior staff and widely shared inside the company suggest that Musk presented an exaggerated picture of Tesla’s procurement to shareholders. Correspondence from Nvidia staffers also indicates that Musk diverted a sizable shipment of AI processors that had been reserved for Tesla to his social media company Twitter.
…“Elon prioritizing Twitter H100 GPU cluster deployment at Twitter versus Tesla by redirecting 12k of shipped H100 GPUs originally slated for Tesla to Twitter instead”, an Nvidia memo from December said. “In exchange, original Twitter orders of 12k H100 slated for Jan and June to be redirected to Tesla.”
A more recent Nvidia email, from late April, said Musk’s comment on the first-quarter Tesla call “conflicts with bookings” and that his April post on Twitter about $10 billion in AI spending also “conflicts with bookings and FY 2025 forecasts.” The email referenced news about Tesla’s ongoing, drastic layoffs and warned that headcount reductions could cause further delays with an “H100 project” at Tesla’s Texas Gigafactory.
…Nvidia CEO Jensen Huang also said, on an earnings call in February, that Nvidia does its best to “allocate fairly and to avoid allocating unnecessarily”, adding, “why allocate something when the data center’s not ready?”
In naming customers already using Nvidia’s next-generation Blackwell platform, Huang mentioned xAI on the May call alongside six of the biggest tech companies on the planet, as well as Tesla.
…Musk likes to tout his infrastructure spending at both companies.
At Tesla, Musk has promised to build a $500 million “Dojo” supercomputer in Buffalo, New York, and a “super dense, water-cooled supercomputer cluster” at the company’s factory in Austin, Texas. The technology could help Tesla develop the computer vision and large language models needed for its robots and autonomous vehicles.
At xAI, which is racing to compete with OpenAI, Anthropic, Google and others in developing generative AI products, Musk is also seeking to build “the world’s largest GPU cluster” in North Dakota, with some capacity online in June, according to an internal Nvidia email from February.
The memo described a “Musk mandate” to make all 100,000 chips available to xAI by the end of 2024. It noted that the LLM behind xAI’s Grok was relying on Amazon and Oracle cloud infrastructure, with Twitter providing additional data center capacity.
The Information previously reported some details of xAI’s data center ambitions.
…At xAI, Musk has also attracted employees away from Tesla, including machine-learning scientist Ethan Knight and at least four other former Tesla employees who had worked on Autopilot and big data projects there before joining the startup…However, the person said, redirecting a large shipment of chips from Tesla to Twitter is extreme, given the scarcity of Nvidia’s technology. The decision means the automaker willingly gave up precious time that could have been spent building out its supercomputer clusters in Texas or New York and advancing the models behind its self-driving software and robotics.