Nvidia’s flagship AI chip reportedly 4.5x faster than the previous champ

A press picture of the Nvidia H100 Tensor Core GPU.

Nvidia announced yesterday that its upcoming H100 "Hopper" Tensor Core GPU set new performance records during its debut in the industry-standard MLPerf benchmarks, delivering results up to 4.5 times faster than the A100, which is currently Nvidia's fastest production AI chip.

The MLPerf benchmarks (technically known as "MLPerf™ Inference 2.1") measure "inference" workloads, which demonstrate how well a chip can apply a previously trained machine learning model to new data. A group of industry firms known as MLCommons developed the MLPerf benchmarks in 2018 to deliver a standardized metric for conveying machine learning performance to potential customers.
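At its core, an inference benchmark times how quickly a system answers queries with an already-trained model, then reports latency and throughput. The sketch below is only an illustration of that idea, not MLPerf's actual harness; `dummy_model` is a hypothetical stand-in for a real model's forward pass.

```python
import time

def dummy_model(sample):
    # Hypothetical stand-in for a trained model's forward pass:
    # burns a fixed amount of compute per query.
    return sum(i * i for i in range(1000)) + sample

def benchmark_inference(model, samples):
    """Time each query and report throughput and average latency."""
    start = time.perf_counter()
    for s in samples:
        model(s)
    elapsed = time.perf_counter() - start
    return {
        "queries": len(samples),
        "throughput_qps": len(samples) / elapsed,
        "avg_latency_ms": 1000 * elapsed / len(samples),
    }

stats = benchmark_inference(dummy_model, list(range(500)))
print(f"{stats['throughput_qps']:.0f} queries/second")
```

MLPerf's real suite adds standardized models, datasets, accuracy targets, and several query scenarios (offline, server, single-stream), but the quantity being compared is the same kind of throughput and latency figure.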

Nvidia's H100 benchmark results versus the A100, in fancy bar graph form.


In particular, the H100 did well in the BERT-Large benchmark, which measures natural language processing performance using the BERT model developed by Google. Nvidia credits this particular result to the Hopper architecture's Transformer Engine, which specifically accelerates training transformer models. This means the H100 could accelerate future natural language models similar to OpenAI's GPT-3, which can compose written works in many different styles and hold conversational chats.


Nvidia positions the H100 as a high-end data center GPU chip designed for AI and supercomputer applications such as image recognition, large language models, image synthesis, and more. Analysts expect it to replace the A100 as Nvidia's flagship data center GPU, but it is still in development. US government restrictions imposed last week on exports of the chips to China brought fears that Nvidia might not be able to deliver the H100 by the end of 2022, since part of its development is taking place there.

Nvidia clarified in a second Securities and Exchange Commission filing last week that the US government will allow continued development of the H100 in China, so the project appears to be back on track for now. According to Nvidia, the H100 will be available "later this year." If the success of the previous generation's A100 chip is any indication, the H100 may power a large variety of groundbreaking AI applications in the years ahead.
