Are Google’s AI supercomputers faster than Nvidia’s?

With powerful machine-learning models still the hottest topic in tech, Google released details on Tuesday about one of its AI supercomputers, claiming it is faster and more power-efficient than rival Nvidia systems.

Tensor Processing Units, or TPUs, are artificial intelligence (AI) chips that Google has been developing and using internally since 2016. Nvidia currently holds roughly 90% of the market for chips used to train and deploy AI models.

Image Source: techzine.eu

As a leading innovator in AI, Google has produced several of the field’s most significant advances over the past decade. Some observers, however, believe the company has lagged in commercializing its ideas, and internally Google has declared a “code red,” rushing to ship products that prove it has not squandered its lead.


Training AI models and products such as Google’s Bard or OpenAI’s ChatGPT, which run on Nvidia’s A100 chips, requires large numbers of processors working simultaneously across many supercomputers, with the machines running nonstop for weeks or months.

On Tuesday, Google announced that it has built a system of more than 4,000 TPUs joined with custom components designed to run and train AI models. The system has been in operation since 2020 and was used over 50 days to train Google’s PaLM model, the company’s challenger to OpenAI’s GPT models.

The Google researchers claimed that the TPU-based supercomputer, known as TPU v4, is “1.2x-1.7x faster and uses 1.3x-1.9x less power than the Nvidia A100.” The researchers said, “The performance, scalability, and availability make TPU v4 supercomputers the workhorses of large language models.”
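Those two multipliers compound: a chip that does more work per second while drawing less power gains on both axes at once. Here is a minimal sketch of that arithmetic in Python, using the ranges quoted above; combining them into a performance-per-watt figure is our own illustration, not a number Google reported.

```python
# Illustrative arithmetic only: the speed and power ranges are the TPU v4
# claims quoted above; the combined perf-per-watt figure is a derivation
# for illustration, not a number from Google's paper.
speedup = (1.2, 1.7)       # TPU v4 speed relative to Nvidia A100
power_saving = (1.3, 1.9)  # factor by which TPU v4 draws less power

# More work per second at lower power compounds the two factors, so the
# relative performance per watt is (speed factor) * (power-saving factor).
low = speedup[0] * power_saving[0]
high = speedup[1] * power_saving[1]

print(f"Implied perf/watt advantage: {low:.2f}x to {high:.2f}x")
# prints: Implied perf/watt advantage: 1.56x to 3.23x
```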

The researchers did not compare Google’s TPU results with Nvidia’s most recent AI chip, the H100, however, because the H100 came to market after Google’s chip and was built with more advanced manufacturing technology.

Nvidia CEO Jensen Huang said that results for the company’s most recent chip, the H100, were significantly faster than those for the previous generation. Results and rankings from MLPerf, an industry-wide AI chip benchmark, were published on Wednesday.

Given the high cost of the computing power AI requires, many in the industry are focused on developing new processors, hardware components such as optical links, or software techniques that reduce the amount of computing power needed.


The computational demands of AI also benefit cloud providers such as Google, Microsoft, and Amazon, which can rent out processing power by the hour and offer startups credits or computing time to build business relationships. Google, for example, said its TPU chips were used to train Midjourney, the AI image generator.
