Nvidia reveals H100 GPU for AI and teases “world’s fastest AI supercomputer”

The “world’s fastest AI supercomputer” will be powered by Nvidia’s new H100 GPU

All of them are built on the company’s new Hopper architecture.

Nvidia announced a number of AI-focused enterprise products at its annual GTC conference. These include details of its new silicon architecture, Hopper; the first data center GPU built on that architecture, the H100; a new CPU “superchip,” the Grace CPU Superchip; and plans for a new supercomputer, Eos, which the company claims will be the world’s fastest for AI.

Nvidia has benefited hugely from the AI boom of the past decade, as its GPUs have proven a strong match for the data-intensive demands of deep learning. The company says it wants to supply more firepower as demand for AI compute keeps growing.

The company specifically highlighted the popularity of a type of machine learning model known as the Transformer. From language models like OpenAI’s GPT-3 to medical systems like DeepMind’s AlphaFold, the approach has produced remarkable results, and these models have grown rapidly in size. When OpenAI launched GPT-2 in 2019, for example, it had 1.5 billion parameters (or connections). Two years later, when Google trained a similar model, it used 1.6 trillion parameters.
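
To make the term concrete, here is a minimal sketch of scaled dot-product attention, the core operation inside a Transformer; the NumPy implementation and toy dimensions are our own illustration, not code from any of the models named above.

```python
# Minimal sketch of scaled dot-product attention, the core Transformer
# operation (an illustrative simplification; real models add learned
# projections, multiple heads, and many stacked layers).
import numpy as np

def attention(Q, K, V):
    """Q, K, V: (seq_len, dim) arrays; returns the attended values."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of values

tokens = np.random.default_rng(0).standard_normal((8, 64))  # 8 tokens, 64-dim
print(attention(tokens, tokens, tokens).shape)               # (8, 64)
```

Each of the billions of parameters counted above is a learned weight feeding layers like this one, which is why training costs climb so steeply with model size.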

“It takes months to train these giant models,” said Paresh Kharya, Nvidia’s senior director of product management, in a press briefing. “So you fire off a job and you wait a month and a half and see what happens. A key challenge in reducing that training time is that performance gains start to decline as you increase the number of GPUs in a data center.”

Nvidia says its new Hopper architecture will help ease these difficulties. Named after the pioneering computer scientist and US Navy rear admiral Grace Hopper, the architecture is specialized to train Transformer models on H100 GPUs six times faster than previous-generation chips, while the new fourth-generation Nvidia NVLink can connect up to 256 H100 GPUs at nine times the bandwidth of the previous generation.

The H100 GPU itself contains 80 billion transistors and is the first GPU to support PCIe Gen 5 and use HBM3, enabling 3 TB/s of memory bandwidth. Nvidia claims the H100 is three times faster than its previous-generation A100 at FP16, FP32, and FP64 compute, and six times faster at 8-bit floating-point math.
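
As a rough illustration of why low-precision support matters, the sketch below shows the common mixed-precision training pattern in PyTorch (a toy linear model of our own choosing, assuming a CUDA-capable GPU; this is not Nvidia’s benchmark code). The half-precision matrix math inside the autocast block is exactly the kind of work that FP16 and FP8 hardware accelerates.

```python
# Sketch of mixed-precision training in PyTorch (toy model; assumes a
# CUDA device is available).
import torch

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()   # rescales the loss to avoid FP16 underflow

x = torch.randn(64, 1024, device="cuda")
target = torch.randn(64, 1024, device="cuda")

with torch.cuda.amp.autocast():        # runs eligible ops in half precision
    loss = torch.nn.functional.mse_loss(model(x), target)

scaler.scale(loss).backward()          # backward pass on the scaled loss
scaler.step(optimizer)                 # unscales gradients, then updates
scaler.update()
```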

“For training large Transformer models, which can otherwise take weeks, the H100 will provide up to nine times the performance,” Kharya said.

The company also announced a new data center CPU, the Grace CPU Superchip, which consists of two CPUs connected directly over the new low-latency NVLink-C2C interconnect. The chip is designed to “serve large-scale HPC and AI applications” alongside the new Hopper-based GPUs, and can be used in CPU-only systems or GPU-accelerated servers. It has 144 Arm cores and 1 TB/s of memory bandwidth.

In addition to the hardware and infrastructure news, Nvidia announced updates to its various AI software services, including Maxine (an SDK for audio and video enhancement, designed to power things like virtual avatars) and Riva (an SDK for speech recognition and speech synthesis).

The company also teased that it is building a new AI supercomputer, which it claims will be the world’s fastest when deployed. The supercomputer, named Eos, will be built on the Hopper architecture and will contain approximately 4,600 H100 GPUs to offer 18.4 exaflops of “AI performance.” The system will be used for Nvidia’s internal research only, and the company says it will be online in a few months.

Over the past few years, companies heavily invested in AI, including Microsoft, Tesla, and Meta, have built or announced their own “AI supercomputers” for internal research. These systems are not directly comparable to regular supercomputers because they run calculations at lower precision, which has let a number of firms leapfrog one another with claims of the world’s fastest.
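
As a back-of-envelope check of how such headline numbers arise, the sketch below multiplies the article’s GPU count by an assumed per-GPU figure (roughly 4 petaFLOPS of peak FP8-with-sparsity throughput per H100, per Nvidia’s published specs; this per-GPU number is our assumption, not from the article):

```python
# Back-of-envelope check of Eos's "AI performance" claim.
# Assumption (not from the article): ~4 petaFLOPS of peak
# FP8-with-sparsity throughput per H100 GPU.
PFLOPS_PER_H100_FP8 = 4.0
NUM_GPUS = 4_600                 # "approximately 4,600" H100s in Eos

total_exaflops = PFLOPS_PER_H100_FP8 * NUM_GPUS / 1_000
print(f"~{total_exaflops:.1f} exaflops of low-precision 'AI performance'")
# Prints ~18.4, matching the headline figure, versus the far smaller
# full-precision numbers used for traditional scientific computing.
```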

However, in his keynote speech, Nvidia CEO Jensen Huang said that Eos will be able to deliver 275 petaFLOPS when running conventional supercomputing workloads, making it 1.4 times faster than “America’s fastest scientific computer” (Summit). “We hope Eos will be the world’s fastest AI computer,” Huang said. “Eos will be the blueprint for the most advanced AI infrastructure for our OEM and cloud partners.”

Source: James Vincent, The Verge, Direct News 99 
