Nvidia (NVDA) made headlines this week at its GTC Conference when it announced it’s building its very first standalone CPU. For a company that made its fortune on the power of its graphics cards, it’s a totally new direction.
And according to CEO Jensen Huang, the superchip, called Grace, is a powerful addition to the company’s lineup.
“This is a new growth market for us,” Huang told Yahoo Finance during an interview.
“The entire data center, whether it’s for scientific computing, or for artificial intelligence training, or inference to the app, the deployment of AI, or data centers at the edge, all the way out to an autonomous system, like a self-driving car, we have data center-scale products and technologies for all of them,” he added.
Grace, named for computer programming pioneer Grace Hopper, features 144 cores and twice the memory bandwidth and energy efficiency of today's leading server chips, according to Nvidia.
The chip, which Nvidia calls a superchip because it's two CPUs in one, is designed specifically for use in AI systems, an area the company has invested in heavily in recent years.
“For the first time, we’re selling CPUs. Today, we connect our GPUs to available CPUs in the market, and we’ll continue to do that. The market is really big — there are a lot of different segments,” Huang said.
“Artificial intelligence or scientific computing, the amount of data that we have to move around is so much. So this gives us the opportunity to offer a revolutionary type of product to an existing marketplace for a new type of application that’s really sweeping computer science.”
In addition to Grace, Nvidia unveiled its new Hopper H100 data center GPU. That chip, which packs 80 billion transistors, offers a significant step up in performance compared to its predecessor, the A100 GPU, Nvidia said.
GPUs are important for high-performance computing and AI applications because they can handle multiple processes at the same time. And Nvidia has utilized those capabilities for years.
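The parallelism described above is data parallelism: the same operation applied independently to many elements at once. As a rough illustration only (not Nvidia's software stack), the sketch below uses NumPy on the CPU as a stand-in for the pattern a GPU would execute with one thread per element; the function name and values are hypothetical.

```python
import numpy as np

# Data-parallel pattern: one arithmetic expression applied independently
# to every element of an array. On a GPU, each element could be handled
# by its own thread; vectorized NumPy serves here as a CPU stand-in.
def scale_and_shift(values: np.ndarray, scale: float, shift: float) -> np.ndarray:
    # Single expression evaluated across all elements "at once"
    return values * scale + shift

data = np.arange(8, dtype=np.float64)          # [0.0, 1.0, ..., 7.0]
result = scale_and_shift(data, scale=2.0, shift=1.0)
print(result.tolist())                          # [1.0, 3.0, ..., 15.0]
```

The same shape of computation shows up in AI workloads, where matrix multiplications amount to huge numbers of these independent multiply-adds.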
“If you think about our company today, it’s really a data center scale company. We offer GPUs and systems and software and networking switches,” Huang explained.
But as chips continue to shrink and the number of transistors packed onto each CPU or GPU increases, there’s always the question of whether chip makers like Nvidia are running up against the limits of the silicon that makes up their semiconductors.
Huang, however, says that’s not the case, and that chip makers still have plenty of time thanks to the power of cloud computing.
“It is absolutely the case that transistor scaling is slowing. We’re getting more transistors, but the … pace of advance has slowed tremendously,” Huang explained.
“In the cloud, you can make computers as big as you like. And, in fact, if you look at the computer that we announced today, it has incredible size. For example, 80 billion transistors, we have eight of those chips in one system. And then we take 32 of those systems, and we put it together into one giant GPU, they work like one giant GPU.”
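Taking Huang's figures at face value, the scale he describes can be worked out with simple arithmetic. This is back-of-the-envelope math from the quoted numbers only, not an official Nvidia specification:

```python
# Figures quoted by Huang: 80 billion transistors per Hopper GPU,
# 8 GPUs per system, 32 systems linked to act as one giant GPU.
transistors_per_gpu = 80_000_000_000
gpus_per_system = 8
systems = 32

total_gpus = gpus_per_system * systems                    # 256 GPUs acting as one
total_transistors = total_gpus * transistors_per_gpu      # 20.48 trillion transistors
print(total_gpus, total_transistors)
```

That works out to 256 GPUs, or roughly 20.48 trillion transistors, behaving as a single logical accelerator.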
Nvidia, of course, isn’t the only company offering data center GPUs to customers interested in AI and high-performance computing. AMD (AMD) sells its own GPU-powered setup that it claims can easily take on Nvidia’s prior-generation data center GPU, the A100.
Nvidia, meanwhile, says the Hopper GPU will blow the doors off of the A100. Now we just need to find out how it stacks up to AMD’s offerings.
Got a tip? Email Daniel Howley at email@example.com. Follow him on Twitter at @DanielHowley.