Nov. 9, 2021 — NVIDIA has introduced 65 new and updated software development kits — including libraries, code samples and guides — that bring improved features and capabilities to data scientists, researchers, students and developers who are pushing the frontiers of a broad range of computing challenges.
The additions, announced by NVIDIA founder and CEO Jensen Huang during his GTC keynote, include next-gen SDKs for accelerating quantum computing, last-mile delivery algorithms and graph neural networks.
The company’s catalog of over 150 accelerated computing kits is relied on by the nearly 3 million members of NVIDIA’s Developer Program, a number that has grown 6x over the past five years. CUDA, NVIDIA’s parallel computing platform and programming model, was downloaded 7 million times in the past year alone, bringing total downloads to 30 million since its launch.
Reaching New Markets
Among the new SDKs are:
- NVIDIA ReOpt, for real-time logistics, introduces advanced, massively parallel algorithms that optimize vehicle routes, warehouse selection and fleet mix. Its dynamic rerouting capabilities can reduce travel time, save fuel costs and minimize idle periods, potentially saving billions for the logistics and supply chain industries.
- cuNumeric, for array computing, implements the NumPy application programming interface for automatic scaling to multi-GPU and multi-node systems with zero code changes — a boon for the 20 million-strong community of data scientists and researchers using Python. Available now on GitHub and Conda, it scales to thousands of GPUs, bringing accelerated computing to the PyData and NumPy ecosystem.
- cuQuantum, for quantum computing, enables large quantum circuits to be simulated dramatically faster, allowing quantum researchers to study a broader space of algorithms and applications. Developers can simulate areas such as near-term variational quantum algorithms for molecules and error correction algorithms to identify fault tolerance, as well as accelerate popular quantum simulators from Atos, Google and IBM.
- CUDA-X accelerated DGL container, for graph neural networks, offers developers and data scientists working on GNNs with large graphs a quick way to set up a working environment. The container makes it easy to work in an integrated, GPU-accelerated GNN environment combining DGL and PyTorch. With GPU-accelerated GNNs, even the largest graphs in the world, approaching a trillion edges in a single graph, can be mined for insights. For instance, Pinterest uses graph neural networks with billions of nodes and edges to understand its ecosystem of over 300 billion Pins, relying on GPUs and optimized libraries for model training and inference.
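cuNumeric's drop-in promise above means existing NumPy code keeps working unchanged. A minimal sketch of what that looks like in practice — the code below is plain NumPy and runs anywhere; per the announcement, switching the import to cuNumeric is the only change claimed to be needed to target multi-GPU, multi-node systems:

```python
# Plain NumPy today; per cuNumeric's drop-in design, replacing this line
# with "import cunumeric as np" is the only change claimed to be needed
# to scale the same code across GPUs and nodes.
import numpy as np

def centered_dot(a, b):
    """Mean-center two arrays, then return their dot product."""
    a = a - a.mean()
    b = b - b.mean()
    return a.dot(b)

x = np.arange(8.0)           # [0, 1, ..., 7]
y = np.ones(8)               # all ones; centering makes it all zeros
print(centered_dot(x, y))    # -> 0.0
```

The point of the design is that application logic like `centered_dot` never has to know which backend executes it.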
“Our team is delighted to collaborate with NVIDIA to accelerate DGL through RAPIDS cuDF for graph construction, RAPIDS cuGraph for graph sampling and custom compute kernels for GNNs,” said Alex Smola, director for Machine Learning at Amazon Web Services. “DGL is open source and also available as a managed service via Amazon NeptuneML.”
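The core operation that GPU-accelerated GNN frameworks like DGL speed up is neighbor aggregation: each node updates its features from its neighbors' features. A schematic, pure-Python illustration of one mean-aggregation round on a toy four-node graph (this is the idea only, not DGL's API):

```python
# One GNN message-passing step on a toy graph: each node's new feature
# is the mean of its neighbors' features. DGL and PyTorch execute this
# kind of aggregation over billions of edges on GPUs; this sketch only
# illustrates the computation itself.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # undirected edge list
feats = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}           # one scalar feature per node

# Build adjacency lists from the edge list.
neighbors = {n: [] for n in feats}
for u, v in edges:
    neighbors[u].append(v)
    neighbors[v].append(u)

# One round of mean aggregation over each node's neighborhood.
new_feats = {
    n: sum(feats[m] for m in nbrs) / len(nbrs)
    for n, nbrs in neighbors.items()
}
print(new_feats)
```

Real GNN layers add learned weight matrices and nonlinearities around this aggregation, but the memory-bound gather-and-reduce pattern shown here is what dominates at trillion-edge scale.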
Updated SDKs Accelerate Application Development
Enhanced features and upgrades have been made to an array of NVIDIA’s most popular SDKs, including Clara, DLSS, RTX, Nsight and Isaac kits.
Additional updated SDKs include:
- RAPIDS 21.10, for data science, offers new functions to work with time series data and several speedups to existing algorithms. The RAPIDS Accelerator for Apache Spark 3.0 allows enterprises to accelerate their analytics operations on NVIDIA GPUs with no code changes. With RAPIDS downloads having grown by 400 percent this year, this is one of NVIDIA’s most popular SDKs.
- DeepStream 6.0, for intelligent video analytics, introduces Graph Composer, a visual drag-and-drop interface that makes computer vision accessible to users with minimal coding experience and enables a simple, intuitive AI product-development flow.
- Triton 2.15, TensorRT 8.2 and cuDNN 8.4, for deep neural networks, provide new optimizations for large language models and inference acceleration for gradient-boosted decision trees and random forests.
- DOCA 1.2, for data center networking, offers a zero-trust security framework that extends threat protection through hardware and software authentication, line-rate data encryption, distributed firewall and smart telemetry.
- Merlin 0.8, for recommender systems, has new capabilities for predicting a user’s next action with little or no user data and support for models larger than GPU memory.
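Among the updates above, RAPIDS 21.10's new time-series functions center on windowed computations. The underlying operation can be sketched in plain Python — this is an illustration of a rolling mean, not the cuDF API, which runs such windows in parallel on the GPU:

```python
# Schematic trailing rolling-mean over a time series, the kind of
# windowed operation RAPIDS accelerates on GPUs. Pure-Python sketch
# for illustration only; cuDF exposes this through its own API.
def rolling_mean(values, window):
    """Return the mean of each full trailing window of size `window`."""
    out = []
    for i in range(window - 1, len(values)):
        chunk = values[i - window + 1 : i + 1]
        out.append(sum(chunk) / window)
    return out

series = [10.0, 20.0, 30.0, 40.0, 50.0]
print(rolling_mean(series, 3))  # -> [20.0, 30.0, 40.0]
```

Because every window can be reduced independently, this pattern maps naturally onto thousands of GPU threads, which is what makes it a good fit for acceleration.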
New Training Courses for SDKs
The global shortage of full-time developers is expected to increase from 1.4 million in 2021 to 4 million in 2025, according to IDC. The analyst firm believes that the long-term solution for redressing this shortage is to create infrastructure that educates and empowers.
Two new courses from NVIDIA’s Deep Learning Institute support and accelerate developer learning and usage of SDKs, adding to the DLI catalog of 40+ courses.
- Introduction to DOCA for DPUs, available now, is a self-paced course providing developers, researchers and students with the basic concepts of NVIDIA DOCA as an enablement platform for accelerated data center computing on NVIDIA BlueField DPUs.
- Building Real-time Video AI Applications, available later this month, covers the transformation of raw video data into real-time deep learning-based insights, using NVIDIA DeepStream intelligent video analytics and the NVIDIA TAO Toolkit to implement hardware-accelerated components for building a highly performant streaming pipeline.
Additional DLI courses align with the new SDKs.
SDKs for Enterprise AI
The NVIDIA AI Enterprise software suite, which includes SDKs such as Triton and RAPIDS, runs on mainstream accelerated servers and is optimized, certified and supported by NVIDIA. Developers can take advantage of the NVIDIA LaunchPad program to experience NVIDIA AI Enterprise in curated labs.
Go to the NVIDIA Developer Zone to learn more.
Source: Ankit Patel, NVIDIA