On Monday, Apple CEO Tim Cook announced a splashy deal with OpenAI to include its powerful artificial-intelligence model in Siri.
However, in the fine print of a technical document Apple published after the event, the company makes clear that Alphabet's Google has emerged as another winner in the Cupertino, California, company's quest to catch up in AI.
To build its foundation AI models, Apple's engineers used the company's own framework software with a range of hardware, including its own on-premises graphics processing units (GPUs) and tensor processing units (TPUs) that are available only on Google's cloud.
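The document does not detail Apple's software stack, but the key technical point is that a single hardware-agnostic framework can target both GPUs and TPUs. As a minimal sketch only, assuming a JAX-style framework (JAX is an illustration here, not something this article attributes to Apple), the same compiled training step runs unchanged on whichever backend is available:

```python
# Hedged illustration: a framework like JAX compiles one training step for
# whatever accelerator is present (CPU, GPU, or Google Cloud TPU), which is
# how a single codebase can span both kinds of hardware. The model below is
# a toy stand-in, not Apple's actual training code.
import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    # Toy linear model standing in for a foundation model's forward pass.
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

@jax.jit  # XLA compiles this step for the detected backend: CPU, GPU, or TPU.
def train_step(params, x, y, lr=0.1):
    grads = jax.grad(loss_fn)(params, x, y)
    # Plain gradient-descent update applied to every parameter array.
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

key = jax.random.PRNGKey(0)
params = {"w": jax.random.normal(key, (4, 1)), "b": jnp.zeros((1,))}
x = jax.random.normal(key, (32, 4))
y = x @ jnp.ones((4, 1))

print("Backend in use:", jax.default_backend())  # e.g. "cpu", "gpu", or "tpu"
for _ in range(100):
    params = train_step(params, x, y)
print("Final loss:", loss_fn(params, x, y))
```

The same script, pointed at a TPU VM instead of a GPU machine, requires no source changes; only the runtime environment differs.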
Google has been developing TPUs for roughly a decade and has publicly disclosed two variants of its fifth-generation chips that can be used for AI training. According to Google, the performance-focused version of the fifth generation can outperform Nvidia's H100 AI processors.
Google also announced at its annual developer conference that a sixth generation will launch this year.
Google has built a cloud-computing hardware and software platform around the processors, designed specifically for training models and running AI applications.
Apple and Google did not immediately respond to requests for comment.
Apple did not say how heavily it relied on Google's chips and software compared with hardware from Nvidia or other AI vendors.
However, using Google's processors typically requires a client to purchase access through its cloud division, much as customers buy computing time from Amazon's AWS or Microsoft's Azure.