Nvidia unveils Ampere GPU architecture for AI boost, and the first target is coronavirus

Nvidia Corp. launched its newest line of chips Thursday, detailing artificial-intelligence capabilities up to 20 times greater than previous products, and the new offerings are already working to fight the COVID-19 pandemic.

The keynote address of Nvidia’s NVDA, -0.28% GPU Technology Conference was posted online Thursday morning, after the company canceled its annual get-together in March because of the spreading coronavirus. In Thursday’s address — split into six “episodes” filmed in the kitchen of Chief Executive Jensen Huang’s house — Huang introduced Ampere, the newest architecture for Nvidia’s signature graphics-processing units, or GPUs.

Ampere will eventually replace Nvidia’s Turing and Volta chips with a single platform that streamlines Nvidia’s GPU lineup, Huang said in a pre-briefing with media members Wednesday. While consumers largely know Nvidia for its videogame hardware, the first launches with Ampere are aimed at AI needs in the cloud and for research.

“Unquestionably, it’s the first time that we’ve unified the acceleration workload of the entire data center into one single platform,” Huang said.

Nvidia discovered years ago that its gaming hardware was well suited to machine learning thanks to its parallel-processing design — when researchers “teach” algorithms with data, GPUs push more of that data through at a faster rate. The company has since steadily developed products for high-performance computing, data centers and autonomous driving based on those needs.

See also: Wall Street sees plenty of upside for these downtrodden tech stocks

Ampere, a 7-nanometer processor that holds more than 54 billion transistors, takes the idea of parallel processing and multiplies it — each individual A100 GPU, the first launched with Ampere, can be partitioned to run up to seven different actions or dedicated to a single need, Huang said. The company has bundled eight of those GPUs together into the DGX A100, which can handle up to 56 tasks at once or be combined into one large task, and reach up to 5 petaflops of AI performance.

“Because it’s fungible, you don’t have to buy all these different types and configurations of servers,” Huang said.
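The arithmetic behind those figures is straightforward: eight GPUs per system, each splittable into as many as seven partitions, yields 56 isolated workloads. A minimal Python sketch, using only the numbers quoted above, works it out; the even per-partition split of the 5-petaflop figure is an assumption for illustration, not an Nvidia specification.

```python
# Back-of-the-envelope figures for the DGX A100, based only on the numbers
# cited in this article: 8 A100 GPUs per system, up to 7 partitions per GPU,
# and roughly 5 petaflops of aggregate AI performance. The even split per
# partition below is an illustrative assumption, not an Nvidia specification.

GPUS_PER_DGX = 8          # A100 GPUs bundled into one DGX A100
PARTITIONS_PER_GPU = 7    # maximum partitions per A100, per the keynote
DGX_AI_PETAFLOPS = 5.0    # quoted peak AI performance of one DGX A100

max_concurrent_tasks = GPUS_PER_DGX * PARTITIONS_PER_GPU
per_gpu_tflops = DGX_AI_PETAFLOPS * 1000 / GPUS_PER_DGX
per_partition_tflops = per_gpu_tflops / PARTITIONS_PER_GPU  # assumes an even split

print(f"Isolated workloads per DGX A100:  {max_concurrent_tasks}")            # 56
print(f"Approx. AI throughput per GPU:    {per_gpu_tflops:.0f} TFLOPS")       # 625
print(f"Approx. throughput per partition: {per_partition_tflops:.0f} TFLOPS") # ~89
```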

The first DGX A100 systems — which start at $200,000 — were delivered to Argonne National Laboratory outside Chicago earlier this month. Researchers working for the Department of Energy at Argonne will use the AI power to study COVID-19 and potential cures for the disease.

“The compute power of the new DGX A100 systems coming to Argonne will help researchers explore treatments and vaccines and study the spread of the virus, enabling scientists to do years’ worth of AI-accelerated work in months or days,” Rick Stevens, associate laboratory director for Computing, Environment and Life Sciences at Argonne, said in a news release.

Nvidia also detailed other COVID-19 approaches by researchers using Nvidia’s machine-learning capabilities, including sequencing the genome of the coronavirus that causes COVID-19 in just seven hours, and screening a billion potential drug combinations in one day. Nvidia also gave updates on its Clara platform for AI in health care, adding AI models that can be used to detect and study infected patients using data from chest scans.

Opinion: Nvidia has become a power broker for the next wave of data-center technology

Many researchers cannot afford a full DGX A100 system, so cloud providers are planning to purchase A100 GPUs and sell access remotely. Nvidia said that 18 cloud-computing providers plan to incorporate A100 chips, including some of the largest in the U.S. and China — Amazon.com Inc.’s AMZN, +0.46% AWS, Microsoft Corp.’s MSFT, -1.51% Azure, Alphabet Inc.’s GOOGL, -1.95% GOOG, -1.92% Google Cloud, as well as cloud offerings from Alibaba Group Holding Ltd. BABA, -0.42%, Baidu Inc. BIDU, -2.03% and Tencent Holdings Ltd. TCEHY, +4.49%.

“I expect Nvidia A100 to be in every single cloud,” Huang said matter-of-factly in Wednesday’s briefing.

Nvidia did not release any information Wednesday about consumer GPUs using Ampere, but when asked by a reporter in the briefing about the difference between enterprise and consumer approaches to Ampere, Huang said “there’s great overlap in the architecture, but not in the configuration.”

Nvidia also announced products meant to be used “at the edge,” meaning they process data close to the sensors that generate it rather than in a central data center. As an example, Nvidia said its EGX A100 offering could manage hundreds of cameras scattered around an airport, while the smaller EGX Jetson Xavier NX would take on a collection of cameras at retail outlets.

Read: Intel and Nvidia turn Merger Monday into a blockbuster sequel

The company’s automotive efforts led to new partnerships, including a collaboration with BMW Group on robotic equipment to help build cars. “BMW Group’s use of Nvidia’s Isaac robotics platform to reimagine their factory is revolutionary,” Huang said in a news release.

Nvidia was worth between $5 billion and $20 billion for almost the entire period between the dot-com bust and 2016, but its advances in machine learning have vaulted it to a much higher valuation in recent years. Shares closed at an all-time high of $322.62 on Monday as the company approached a $200 billion market capitalization for the first time, and ended Wednesday’s session with a valuation of more than $191 billion. The stock has gained 32.3% so far this year despite the COVID-19 pandemic, while the S&P 500 index SPX, -1.74% has declined 12.7%.
