Last month, Intel®, in collaboration with Analytics India Magazine, successfully concluded the oneAPI AI Analytics workshop on accelerating Python for data science and machine learning. Held on April 22, 2022, the workshop saw more than 200 participants.
The workshop covered the oneAPI AI Analytics Toolkit in depth. The toolkit contains a core set of tools and libraries for developing high-performance applications on Intel® CPUs, GPUs, and FPGAs, touching upon various deep learning, machine learning, and other data analytics tools. The instructors also highlighted Intel® Optimization for PyTorch, which combines stock PyTorch with Intel® Extension for PyTorch*.
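In practice, adopting Intel® Extension for PyTorch* is usually a small change to an existing script. The sketch below is illustrative rather than code from the workshop, and is guarded so it degrades gracefully where the packages are not installed:

```python
# Minimal sketch of the Intel Extension for PyTorch recipe: import the
# extension and pass an eval-mode model through ipex.optimize().
# Guarded with try/except so the script still runs without the packages.
try:
    import torch
    import intel_extension_for_pytorch as ipex

    model = torch.nn.Linear(4, 2).eval()   # any inference-mode model
    model = ipex.optimize(model)           # apply Intel-specific optimisations
    with torch.no_grad():
        out = model(torch.randn(1, 4))
    ok = tuple(out.shape) == (1, 2)
except ImportError:
    ok = None  # packages absent; the stock PyTorch script would be unchanged
```

The rest of the training or inference script stays as-is, which is the point of building the optimisation on top of stock PyTorch.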
In addition, they explained three major optimisation methodologies, namely vectorisation, parallelism, and memory optimisation, and showed how the Runtime Extension can speed up concurrent inference tasks. The workshop was led by Ritesh R. Kulkarni, business development and channel management, APJ, at Intel, alongside oneAPI certified instructor Krishna Mouli.
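To illustrate the vectorisation idea with a generic NumPy sketch (not code from the workshop): replacing a Python-level loop with a whole-array expression hands the work to optimised native kernels, which the runtime can map onto SIMD units.

```python
import numpy as np

a = np.arange(100_000, dtype=np.float64)
b = np.arange(100_000, dtype=np.float64)

# Scalar loop: one Python-level multiply-add per element.
looped = np.empty_like(a)
for i in range(a.size):
    looped[i] = a[i] * b[i] + 1.0

# Vectorised: the whole expression runs in compiled native code.
vectorised = a * b + 1.0

assert np.allclose(looped, vectorised)
```

The two computations are identical; only the second exposes the data parallelism that the hardware can exploit.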
Here are the key highlights of the workshop:
Ritesh started the workshop by addressing the challenges companies face in running data science and ML projects. Each kind of data-centric hardware, he said, needs to be programmed with different languages and libraries, which means maintaining separate codebases. Tool support, he added, is also inconsistent across platforms.
“Moreover, developing the software for each kind of hardware platform requires a separate investment, not only in terms of time but also an investment on the tools,” explained Kulkarni, highlighting the major programming challenges companies face while working on multiple architectures. This is where oneAPI comes into the picture.
He said that oneAPI delivers a unified software programming development environment across CPU and accelerator architectures. Further, he spoke about the oneAPI industry initiative, which promotes open community and industry collaboration and enables code reuse across architectures and vendors. “It includes a unified language and libraries that deliver full native code performance. It also delivers native code performance across hardware, including CPUs, GPUs, FPGAs, and AI accelerators,” said Kulkarni.
Further, he said that oneAPI is a foundational programming stack used to optimise the middleware and frameworks that sit on top of it, including AI frameworks such as TensorFlow and PyTorch.
Throwing light on Data Parallel C++ (DPC++), he said it is oneAPI’s implementation of SYCL. For those unaware, SYCL is a high-level language designed for data-parallel programming productivity. Based on standard C++, DPC++ delivers full native code performance on par with C++. “It simplifies code migration from a proprietary language with a programming model familiar to GPU software developers,” he added. He also touched upon various oneAPI tools and libraries, including oneMKL, oneVPL, oneDAL, and oneDNN, among others.
Citing Zuse Institute Berlin (ZIB), Kulkarni said the institute built its easyWave tsunami simulation application using Intel oneAPI, migrating the code with DPC++ and compatibility tools. “It gave a strong performance on Intel architecture and helped them achieve near-native performance, which is almost 95 per cent compared to the competition,” he added.
Here are some of the oneAPI accelerator ecosystem partners:
Besides these, Intel also offers commercial toolkits and software development products, which come with worldwide support from Intel technical consulting engineers. The prior commercial tool suites, Intel® Parallel Studio XE and Intel® System Studio, have transitioned to oneAPI products.
Intel® recently launched its oneAPI 2022 toolkits, which expand cross-architecture features to give developers greater utility and architectural choice to accelerate computing. The new capabilities include the world’s first unified compiler implementing C++, SYCL, Fortran, and data-parallel Python for CPUs and GPUs; advanced accelerator performance modelling and tuning; and performance acceleration for AI and ray-tracing visualisation workloads. Kulkarni said that more than 900 new and enhanced features were added over the last year, strengthening every tool in the foundational and domain-specific toolkits.
“As our hardware roadmap keeps evolving every year, we come with the latest and greatest generations of processors,” said Kulkarni, citing Intel® Advisor, Intel® Distribution for GDB, and Intel® VTune Profiler for designing, debugging and tuning.
Kulkarni said that the oneAPI toolkit is divided into a base toolkit (high-performance tools for building DPC++ applications, plus the oneAPI libraries) and domain-specific toolkits, such as the Intel® oneAPI HPC Toolkit and the Intel® oneAPI IoT Toolkit. “As the name suggests, these are used for HPC workloads, IoT, etc.,” he added.
Other domain-specific toolkits include the Intel® AI Analytics Toolkit, the Intel® Distribution of OpenVINO™ toolkit, and the Intel® System Bring-Up Toolkit, typically used for accelerating machine learning and data science pipelines, deploying high-performance inference, and debugging and tuning systems for power and performance, respectively.
The Intel® oneAPI Base Toolkit, on the other hand, is primarily used to accelerate data-centric workloads. It includes Intel® Distribution for Python (direct programming); Intel® oneAPI Math Kernel Library (oneMKL), Intel® oneAPI Data Analytics Library (oneDAL), oneDNN, and oneCCL (API-based programming); and Intel® Advisor (analysis and debug tools).
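For a sense of what “API-based programming” means here: in the Intel® Distribution for Python, familiar NumPy calls are backed by oneMKL, so the same code speeds up with no source changes. The snippet below is a generic illustration (the arrays and sizes are made up), not material from the workshop:

```python
import numpy as np

# A dense matrix product; NumPy dispatches it to whatever BLAS it was
# built against -- oneMKL in the Intel Distribution for Python, another
# BLAS elsewhere -- so the script needs no Intel-specific code.
a = np.arange(6, dtype=np.float64).reshape(2, 3)
b = np.arange(12, dtype=np.float64).reshape(3, 4)
c = a @ b

print(c.shape)  # (2, 4)
```

This is the appeal of the API-based layer: the acceleration lives below the library interface, not in user code.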
Hands-on guide to the oneAPI toolkit
After the detailed introduction to oneAPI, oneAPI certified instructor Krishna Mouli took over the workshop, providing a hands-on guide to accelerating Python for data science and machine learning using the oneAPI toolkits.
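A typical first step in such a hands-on session is patching scikit-learn with the Intel® Extension for Scikit-learn (scikit-learn-intelex, part of the AI Analytics Toolkit). The sketch below is illustrative rather than the instructor's actual notebook, and falls back to stock scikit-learn when the extension is absent:

```python
import numpy as np

# patch_sklearn() must be called before scikit-learn estimators are
# imported; supported estimators are then re-routed to oneDAL kernels.
try:
    from sklearnex import patch_sklearn
    patch_sklearn()
except ImportError:
    pass  # stock scikit-learn is used unchanged

from sklearn.cluster import KMeans

X = np.random.default_rng(0).random((1000, 8))
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(labels.shape)  # (1000,)
```

As with the oneMKL-backed NumPy case, the modelling code itself is untouched; only the two patch lines at the top differ from a stock script.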