Your Tech Story


Nvidia and Microsoft Collaborate To Build AI Supercomputers

Nvidia and Microsoft have announced a multi-year collaboration to build “one of the most powerful AI supercomputers in the world,” capable of handling the massive processing workloads needed to train and scale AI.

Image Source: ciobulletin.com

According to reports, the AI supercomputer will be powered by Microsoft Azure’s cutting-edge supercomputing infrastructure together with Nvidia GPUs, networking, and a comprehensive stack of AI software, supporting businesses in training, deploying, and scaling AI, including large, state-of-the-art models.

NVIDIA’s A100 and H100 GPUs will be part of the array, coupled with NVIDIA’s Quantum-2 400 Gb/s InfiniBand networking technology. Notably, Azure would be the first public cloud to feature NVIDIA’s full AI tech stack, allowing businesses to train and deploy AI at scale.

Manuvir Das, VP of enterprise computing at Nvidia, noted, “AI technology advances as well as industry adoption are accelerating. The breakthrough of foundation models has triggered a tidal wave of research, fostered new startups and enabled new enterprise applications. Our collaboration with Microsoft will provide researchers and companies with state-of-the-art AI infrastructure and software to capitalise on the transformative power of AI.”

NVIDIA will work with Azure to research and accelerate advances in generative AI, a rapidly developing field of artificial intelligence in which foundation models like Megatron-Turing NLG 530B serve as the basis for unsupervised, self-learning algorithms that generate new text, digital images, code, audio, or video.

Additionally, the companies will work together to improve Microsoft’s DeepSpeed deep learning optimization library. Azure enterprise clients will have access to NVIDIA’s full suite of AI workflows and software development kits, tailored for Azure. DeepSpeed will make use of the NVIDIA H100 Transformer Engine to accelerate transformer-based models used for large language models, generative AI, and code generation, among other applications.

This technology leverages 8-bit floating point (FP8) precision, which delivers twice the throughput of 16-bit operations, to significantly speed up DeepSpeed’s AI calculations for transformers.
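The throughput gain from FP8 comes at the cost of a coarser number format. As a rough illustration of that trade-off (this is a simplified model written for this article, not NVIDIA’s or DeepSpeed’s actual implementation, and it ignores exponent-range clamping and subnormals), the sketch below rounds a value to the nearest number representable with the 3 mantissa bits of an FP8 E4M3-style format:

```python
import math

def quantize_fp8_e4m3(x: float) -> float:
    """Round x to the nearest value representable with 3 mantissa bits,
    as in an FP8 E4M3-style format (1 sign, 4 exponent, 3 mantissa bits).
    Simplified: exponent overflow/underflow and subnormals are ignored."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    m, e = math.frexp(abs(x))      # abs(x) == m * 2**e, with 0.5 <= m < 1
    # Keep 4 significant bits of the significand (implicit leading 1 + 3 bits)
    m = round(m * 16) / 16
    return sign * math.ldexp(m, e)

# Powers of two survive exactly; most other values are rounded.
print(quantize_fp8_e4m3(1.0))      # 1.0
print(quantize_fp8_e4m3(3.14159))  # 3.25
```

In practice frameworks compensate for this reduced precision with per-tensor scaling and by keeping accumulations in higher precision, which is why FP8 training can approach 16-bit accuracy while roughly doubling throughput.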

The Verge reports that the recent rapid growth of these AI models has driven a major increase in demand for powerful computing infrastructure.

The partnership is intriguing for several reasons, but notably because it makes one of the largest computing clusters accessible to companies. Thanks to optimizations in NVIDIA’s newest ‘Hopper’ generation of GPUs, companies will be able to train and deploy AI at a scale that was previously prohibitively expensive, and to do so with significantly greater efficiency.

Although NVIDIA has a supercomputer of its own, the collaboration shows that it recognizes the enormous computational demands of contemporary models. The deal brings together two of the largest organizations in the AI industry.

Microsoft has experience in this area, as seen in its relationship with OpenAI and its commitment to the development of ethical and safe AI. NVIDIA, for its part, has been a pillar of AI development and research for the past decade thanks to its powerful GPUs and supporting tech stack, including CUDA and Tensor Cores.
