Nvidia provides powerful GPUs to support innovative AI developments – SiliconANGLE News

Graphics processing units are hot right now, with retailers quickly selling out of stock as consumers mine cryptocurrency or play graphically demanding video games.

GPUs, however, are also well suited to artificial intelligence development, and Nvidia Corp. is seeking to give AI developers a platform powerful enough to take on innovative projects.

“AI is the new engine behind search engines,” said Ian Buck (pictured), general manager and vice president for Tesla Data Center Business at Nvidia. “It’s amazing what AI has been demonstrating, what it can do, and its new use cases are showing up all the time.”

Buck spoke with John Furrier, host of theCUBE, SiliconANGLE Media’s livestreaming studio, during AWS re:Invent. They discussed AI innovations, various uses of AI, how Nvidia contributes to AI innovations and more. (* Disclosure below.)

Processing the future

Nvidia has partnered with Amazon Web Services Inc., one of the first cloud providers to offer GPUs in the cloud. Recently, Nvidia announced two new instance types: the G5 instance, which uses up to eight A10G Tensor Core GPUs and supports Nvidia RTX ray-tracing technology, and the G5g instance.

“It’s the first Graviton or Arm-based processor connected to a GPU and successful in the cloud,” Buck said. “The focus here is Android gaming and machine learning inference. And we’re excited to see the advancements that Amazon is making and AWS is making with Arm in the cloud.”

With its proposed acquisition of U.K.-based chip designer Arm Ltd., Nvidia is aiming to secure its place in AI history. Arm supplies a vast ecosystem and, together with Nvidia, could unlock opportunities for innovative development across devices, from personal computers to self-driving cars, from the edge to the cloud.

“By bringing all of Nvidia’s rendering graphics, machine learning and AI technologies to Arm, we can help bring that innovation that Arm allows, that open innovation, because there’s an open architecture, to the entire ecosystem,” Buck said. “We can help bring it forward to the state of the art in AI machine learning and graphics.”

To deploy AI models, Nvidia offers its Triton Inference Server, which lets developers serve trained models and scale them across various GPUs and CPUs.
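In practice, Triton serves models from a file-system “model repository,” where each model directory contains versioned model files and a `config.pbtxt` describing its inputs and outputs. As a rough sketch (the model name, framework and tensor shapes below are illustrative, not from the interview), a minimal repository might look like:

```
model_repository/
└── image_classifier/          # hypothetical model name
    ├── config.pbtxt           # model configuration
    └── 1/                     # version directory
        └── model.onnx         # trained model exported to ONNX
```

with a `config.pbtxt` along these lines:

```
name: "image_classifier"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  {
    name: "input__0"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "output__0"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

The server is then pointed at the repository (e.g., `tritonserver --model-repository=/path/to/model_repository`) and exposes the model for inference over HTTP or gRPC.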

Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s coverage of AWS re:Invent. (* Disclosure: Nvidia Corp. sponsored this segment of theCUBE. Neither Nvidia nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)



