
Tesla’s Dojo: Key Milestones

A detailed timeline of Tesla’s Dojo supercomputer, tracing its key milestones and developments in AI and autonomous driving technology

Elon Musk is not satisfied with Tesla’s status as a mere automaker. He envisions Tesla as an AI company that has successfully developed the technology to enable autonomous vehicle operation.

Tesla’s custom-built supercomputer, Dojo, is essential to training its Full Self-Driving (FSD) neural networks. FSD is not entirely autonomous; it can perform specific automated driving duties but still requires an attentive human driver.

However, Tesla believes it can surpass the threshold from nearly self-driving to fully self-driving by accumulating additional data, computing capacity, and training.

And that is where Dojo comes in.

Musk has been hinting at Dojo for some time, but the executive has been intensifying discussions about the supercomputer throughout 2024. Dojo’s significance to Tesla may be existential; investors seek assurances that Tesla can achieve autonomy amid declining EV sales. The timeline of Dojo mentions and promises is provided below.

2019

Initial references to Dojo

April 22 – Tesla’s AI team presents at Autonomy Day to discuss the AI that underpins Full Self-Driving and Autopilot. The company discloses details regarding Tesla’s custom-built chips, which are expressly engineered for self-driving vehicles and neural networks.

Musk teases Dojo during the event, disclosing that it is a supercomputer designed for AI training. He also observes that all Tesla vehicles in production at the time would have all the requisite hardware for full self-driving capabilities and would only require a software update.

2020

Musk commences the Dojo roadshow

February 2 – Musk announces that Tesla has over one million connected vehicles worldwide, equipped with the sensors and compute necessary for full self-driving. He also emphasizes the capabilities of Dojo.

“Dojo, our training supercomputer, will be capable of processing vast quantities of video training data and efficiently running hyperspace arrays with a vast number of parameters, as well as providing ample memory and ultra-high bandwidth between cores.” He promises additional information at a later time.

August 1 – Musk reiterates Tesla’s plan to create a neural network training computer, Dojo, capable of processing “truly vast amounts of video data.” He refers to it as “a beast.”

He also stated that the initial iteration of Dojo is “approximately a year away,” which would indicate that its launch date is on or around August 2021.

December 31 – Musk asserts that Dojo is not strictly necessary but will enhance the quality of self-driving: it is not enough for Autopilot to be safer than human drivers; it must ultimately be more than ten times safer.

2021

Tesla makes Dojo official.

The automaker’s inaugural AI Day, designed to attract engineers to Tesla’s AI team, was held on August 19. Dojo was officially announced at the event.

Tesla also unveils its D1 chip, which the automaker intends to use alongside Nvidia GPUs to power the Dojo supercomputer. Tesla specifies that its AI cluster will house 3,000 D1 chips.

In October, Musk released a whitepaper, “A Guide to Tesla’s Configurable Floating Point Formats & Arithmetic,” by Dojo Technology.

The whitepaper delineates a technical standard for a novel form of binary floating-point arithmetic employed in deep-learning neural networks. This standard can be implemented “entirely in software, hardware, or any combination of software and hardware.”
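To make the idea concrete, here is a minimal sketch of what a configurable floating-point format involves: quantizing values to whatever exponent and mantissa widths a given part of the network can tolerate. The bit widths, bias, and rounding behavior below are illustrative assumptions, not the formats specified in Tesla’s whitepaper.

import math

def quantize(value: float, exp_bits: int, man_bits: int, bias: int) -> float:
    """Round value to the nearest number representable with the given
    exponent and mantissa widths, saturating on overflow."""
    if value == 0.0:
        return 0.0
    sign = -1.0 if value < 0 else 1.0
    mag = abs(value)
    exp = math.floor(math.log2(mag))                           # unbiased exponent of the input
    exp = max(min(exp, (1 << exp_bits) - 1 - bias), 1 - bias)  # clamp to the exponent range
    ulp = 2.0 ** (exp - man_bits)                              # spacing between adjacent representable values
    m_max = (1 << (man_bits + 1)) - 1                          # largest mantissa, implicit leading bit included
    return sign * min(round(mag / ulp), m_max) * ulp

# An 8-bit-style configuration: 4 exponent bits, 3 mantissa bits, bias 7.
for x in (0.1, 1.5, 3.14159, 480.0, 1e6):
    print(f"{x} -> {quantize(x, exp_bits=4, man_bits=3, bias=7)}")

Shrinking the mantissa trades precision for memory and bandwidth, which is the per-workload tradeoff such configurable formats are meant to expose.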

2022

Tesla discloses Dojo advancements.

August 12 – Musk declares that Tesla will phase in Dojo gradually, meaning it will not need to buy as many incremental GPUs next year.

September 30 – At Tesla’s second AI Day, the company disclosed that it had installed the first Dojo cabinet and conducted 2.2 megawatts of load testing.

Tesla claims it is producing one tile per day, each composed of 25 D1 chips. Tesla also presents Dojo onstage, using a Stable Diffusion model to generate an AI image of a “Cybertruck on Mars.”

The company also announces plans to construct a total of seven Exapods in Palo Alto, targeting completion of a full Exapod cluster in Q1 2023.

2023

A “long-shot bet”

April 19 – During Tesla’s first-quarter earnings call, Musk informs investors that Dojo “has the potential to become a sellable service that we would offer to other companies in the same way that Amazon Web Services offers web services” and “has the potential for an order of magnitude improvement in the cost of training.”

Musk also acknowledges that he would “consider Dojo a potentially risky investment,” but he believes it is a “bet worth taking.”

June 21 – According to a thread from the Tesla AI X account, the company’s neural networks are already installed in customer vehicles. The thread includes a graph that displays a timeline of Tesla’s current and projected compute capacity.

This graph indicates that Dojo production will commence in July 2023, although it is unclear whether this pertains to the D1 chips or the supercomputer itself. That same day, Musk states that Dojo is already online and running tasks at Tesla’s data centers.

The graph also predicts that Tesla’s compute will rank among the top five in the world by approximately February 2024 (although there is no evidence that this has been achieved) and that Tesla will reach 100 exaflops by October 2024.
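For a sense of scale, the back-of-envelope calculation below converts the 100-exaflop goal into GPU counts; the per-GPU throughput is an assumed figure (roughly an Nvidia A100’s peak BF16 rate), not a number from Tesla.

# Assumed peak throughput per GPU; real sustained throughput would be lower.
A100_PEAK_FLOPS = 312e12     # ~312 teraflops (BF16) per Nvidia A100
TARGET_FLOPS = 100e18        # the stated 100-exaflop goal

gpus_needed = TARGET_FLOPS / A100_PEAK_FLOPS
print(f"~{gpus_needed:,.0f} A100-class GPUs")  # about 320,000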

July 19 – Tesla notes in its second-quarter earnings report that it has started production of Dojo. Musk also says Tesla plans to spend more than $1 billion on Dojo through 2024.

September 6 – Musk posts on X that Tesla is limited by AI training compute but that Nvidia and Dojo will fix that. He says that managing the data from the roughly 160 billion frames of video Tesla receives from its vehicles daily is extremely difficult.

2024

Strategies for expansion

Dojo is a high-risk, high-reward endeavor, as Musk reiterated during Tesla’s fourth-quarter and full-year earnings call on January 24.

He also asserts that Tesla is “pursuing the dual path of Nvidia and Dojo,” that “Dojo is working,” and that it is “doing training jobs.” He observes that Tesla is expanding it and has “intentions to establish Dojo 1.5, Dojo 2, Dojo 3, and other facilities.”

January 26 – Tesla discloses its intention to allocate $500 million toward constructing a Dojo supercomputer in Buffalo.

Musk then somewhat downplays the investment, stating on X that although $500 million is substantial, it is “only equivalent to a 10,000 H100 system from Nvidia.”

He adds that Tesla will spend an even larger sum on Nvidia hardware this year, and that the cost of remaining competitive in AI is at least several billion dollars annually.

April 30 – According to IEEE, the next-generation training tile, the D2, is currently in production.

The D2 is constructed by placing the entire Dojo tile on a single silicon wafer rather than connecting 25 separate chips to form one. This information was disclosed at TSMC’s North American Technology Symposium.

May 20 – Musk observes that the rear portion of the Giga Texas factory extension will feature the construction of a “super dense, water-cooled supercomputer cluster.”

June 4 – Musk diverted thousands of Nvidia processors intended for Tesla to X and xAI, according to a report by CNBC.

Musk, who initially denied the claim, stated on X that Tesla could not activate the Nvidia chips because of the ongoing construction of the south extension of Giga Texas.

He stated that the chips would have remained in a warehouse. He observed that the extension will “house 50,000 H100s for FSD training.”

Additionally, he posts:

“Approximately half of the approximately $10B in AI-related expenditures that I predicted Tesla would incur this year is internal, primarily the Tesla-designed AI inference computer and sensors integrated into all of our vehicles, as well as Dojo.”

He adds that Nvidia hardware accounts for approximately two-thirds of the cost of constructing AI training superclusters, and estimates that Tesla will purchase $3 billion to $4 billion of Nvidia hardware this year.

July 1 – Musk discloses on X that Tesla’s upcoming AI model may not be compatible with the hardware in current vehicles. He asserts that the next-generation AI’s approximately fivefold increase in parameter count is “extremely challenging to accomplish without upgrading the vehicle inference computer.”

Nvidia’s supply chain challenges

July 23 – During Tesla’s second-quarter earnings call, Musk states that the demand for Nvidia hardware is “so high that it is often difficult to get the GPUs.”

Musk asserts, “I believe that this necessitates a significant increase in our investment in Dojo to guarantee that we have the necessary training capabilities,” adding, “We perceive a potential avenue to compete with Nvidia through Dojo.”

Tesla’s investor presentation includes a graph that indicates that the company’s AI training capacity will increase from approximately 40,000 GPUs in June to approximately 90,000 H100 equivalent GPUs by the end of 2024.

Musk later announces on X that Dojo 1 will offer “approximately 8k H100-equivalent of online training by the end of the year.”

He also publishes images of the supercomputer, whose stainless steel exterior is reminiscent of a refrigerator and of Tesla’s Cybertrucks.

In response to a post from an individual who claimed to be establishing a club of “Tesla HW4/AI4 owners angry about being left behind when AI5 comes out,” Musk stated that AI5 is approximately 18 months away from high-volume production.

August 3 – Musk posts on X that he conducted a tour of the Tesla supercomputer cluster at Giga Texas (aka Cortex). He observes that it will consist of approximately 100,000 H100/H200 Nvidia GPUs, with “massive storage for video training of FSD & Optimus.”

Hillary Ondulohi

Hillary is a media creator with a background in mechanical engineering. He leverages his technical expertise to craft informative pieces on protechbro.com, making complex concepts accessible to a wider audience.
