Google’s Trillium TPUs deliver unprecedented performance gains for AI workloads

AI Hype Train: Tensor Processing Units are specialized ASIC chips designed to accelerate machine learning algorithms. Google has been using TPUs to power its ML-based cloud services since 2015, and it is now rolling out the latest generation as an even more efficient and powerful AI accelerator platform.

At this year’s I/O developer conference, Google announced its “most advanced” TPU yet. Trillium, a machine learning accelerator, is the culmination of more than a decade of research into specialized AI hardware and is the foundational component Google says it needs to build the next wave of AI models.

According to Google, the first TPU was developed in 2013, and without it many of the company’s most popular services would not be possible today. Real-time voice search, photo object recognition, language translation, and advanced AI models such as Gemini, Imagen, and Gemma all benefit from TPUs.

Like its predecessors, Trillium is designed from the ground up to accelerate neural network workloads. Google’s sixth-generation TPU achieves a 4.7x increase in peak compute performance per chip over the previous-generation TPU v5e, thanks to larger matrix multiplication units (MXUs) and higher clock speeds.
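As a rough back-of-the-envelope illustration (not Google’s published math, and with entirely made-up chip parameters), peak matrix-multiply throughput scales with both the number of multiply-accumulate units in the MXU and the clock rate, which is why a larger MXU and a faster clock compound into a large per-chip gain:

```python
# Illustrative only: peak matmul throughput of a systolic-array accelerator scales
# with the number of multiply-accumulate (MAC) units and the clock rate.
# The dimensions and clocks below are hypothetical placeholders, not Trillium specs.

def peak_flops(mxu_rows: int, mxu_cols: int, num_mxus: int, clock_hz: float) -> float:
    """Each MAC performs one multiply and one add per cycle (2 FLOPs)."""
    return 2 * mxu_rows * mxu_cols * num_mxus * clock_hz

baseline = peak_flops(mxu_rows=128, mxu_cols=128, num_mxus=4, clock_hz=0.9e9)
scaled = peak_flops(mxu_rows=256, mxu_cols=256, num_mxus=4, clock_hz=1.05e9)  # larger MXU + faster clock
print(f"relative peak gain: {scaled / baseline:.2f}x")  # ~4.67x with these made-up numbers
```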

Trillium chips feature a third-generation SparseCore, a purpose-built accelerator for handling the “very large embeddings” common in advanced ranking and recommendation workloads. Additionally, the new TPU doubles the high-bandwidth memory capacity and bandwidth, as well as the interconnect bandwidth, compared to the v5e generation.
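For context, the sketch below shows the kind of sparse embedding lookup that ranking and recommendation models rely on. It is a generic JAX example with hypothetical table sizes, not Google’s SparseCore interface:

```python
# Minimal sketch of the sparse lookup pattern behind ranking/recommendation models.
# Generic JAX code with made-up sizes -- not Google's SparseCore API.
import jax
import jax.numpy as jnp

vocab_size, embed_dim = 100_000, 128  # hypothetical embedding table dimensions
table = jax.random.normal(jax.random.PRNGKey(0), (vocab_size, embed_dim))

@jax.jit
def lookup_and_pool(table, ids):
    # Gather a few rows out of a large table, then pool them. The access pattern
    # is sparse and irregular, which is what embedding-focused hardware targets.
    return jnp.take(table, ids, axis=0).mean(axis=0)

user_vector = lookup_and_pool(table, jnp.array([42, 7, 1234, 99999]))
print(user_vector.shape)  # (128,)
```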

Despite being far more powerful and capable, Trillium is also more sustainable: Google says its sixth-generation TPU is over 67 percent more energy efficient than TPU v5e. The company listed several advanced AI capabilities it expects Trillium to enable for customers, including the deeper human-computer interaction that Essential AI is working on.

Trillium will also provide AI acceleration to Nuro, which builds AI models for robots; Deep Genomics, which uses AI for advanced drug discovery; and Deloitte, which aims to “transform” its business through generative AI. Google DeepMind will likewise use Trillium TPUs to train future versions of Gemini, Google’s own family of foundation models.

Trillium is part of the AI Hypercomputer, a supercomputing architecture Google designed to handle cutting-edge AI workloads. Within the AI Hypercomputer, optimized TPU-based infrastructure and open-source software frameworks work together to train (and serve) future AI models.
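To make the software side concrete, here is a minimal, hypothetical JAX sketch of the “write once, compile anywhere” pattern such open-source frameworks enable: XLA compiles the same program for whichever backend is present, a CPU locally or TPUs on a Cloud TPU VM. Nothing in it is Trillium-specific.

```python
# Hypothetical sketch: the same JAX program is compiled by XLA for whichever
# backend is available -- CPU here, TPU devices on a Cloud TPU VM.
# Nothing below is Trillium-specific.
import jax
import jax.numpy as jnp

print(jax.devices())  # e.g. [CpuDevice(id=0)] locally, TPU devices on a TPU VM

@jax.jit
def dense_layer(x, w, b):
    # One matmul + bias + nonlinearity: the basic block TPUs are built to speed up.
    return jax.nn.relu(x @ w + b)

kx, kw = jax.random.split(jax.random.PRNGKey(0))
x = jax.random.normal(kx, (32, 512))
w = jax.random.normal(kw, (512, 256))
b = jnp.zeros((256,))
print(dense_layer(x, w, b).shape)  # (32, 256)
```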

Third-party companies will have access to new Trillium-based cloud instances later this year.

Source: https://www.techspot.com/news/103031-google-trillium-tpu-achieves-unprecedented-performance-increase-ai.html