For the love of high performance computing: Attendees worldwide learn supercomputing skills at annual Argonne training program – EurekAlert


Image caption: Held virtually in August, ATPESC welcomed nearly 80 attendees for two weeks of intensive training on the key skills and tools needed to use supercomputers for science.

Credit: Argonne National Laboratory

Early career scientists from around the world received intensive, hands-on supercomputing training at the Argonne Training Program on Extreme-Scale Computing.

A love for high performance computing (HPC) and an eagerness to learn the latest about supercomputers drew nearly 80 attendees to this year's ninth annual Argonne Training Program on Extreme-Scale Computing (ATPESC).

“I view training the next generation of computational scientists as a national priority,” said John Mellor-Crummey, professor of Computer Science and of Electrical and Computer Engineering at Rice University and a lecturer for ATPESC every year since its inception in 2013.

Hosted by the U.S. Department of Energy’s (DOE) Argonne National Laboratory, ATPESC offers training on key skills, approaches and tools needed to design, implement and execute computational science and engineering applications on HPC systems, including upcoming exascale supercomputers. While held virtually for the second year due to the pandemic, it remained a priority for early career scientists who competed for the coveted training opportunity.


Since its start, ATPESC has hosted more than 700 participants from around the world. As part of the program, they are given access to world-class supercomputers at the Argonne Leadership Computing Facility (ALCF), the Oak Ridge Leadership Computing Facility (OLCF) and the National Energy Research Scientific Computing Center (NERSC). Attendees receive training from leading HPC experts at national laboratories, universities, NASA and industry.

Besides accumulating knowledge and advice from the experts, attendees also got to know the lecturers and their ATPESC peers through a dedicated Slack channel, where they could ask questions and engage in conversations.

“One of the hardest things to do when trying to tackle a complex computational problem is to figure out how to get started,” said Mellor-Crummey. ​“I am certain that ATPESC lowers the barrier for getting started by equipping participants with the knowledge and skills they need to succeed.”

Julio Mendez, a computational fluid dynamics (CFD) engineer at Corrdesa LLC, and a CFD Research Fellow for the Mechanical Engineering department at North Carolina A&T State University, was accepted to ATPESC after applying for the competitive program several times in the past.

“Being at ATPESC 2021 has been an extremely delightful experience,” said Mendez. ​“I have learned so much from the lecturers, but also from the other attendees. It is incredible how this nurturing experience can provide you with another perspective of scientific computation. This demonstrates that the learning experience never ends, and we are very fortunate we can attend this type of event to learn about the new trends and good practices.”

Lisa Claus, a postdoctoral scholar in the Computational Research division at Berkeley Lab, learned how everything is connected in HPC, how all components are dependent on each other and what the future looks like for supercomputers.

“I really enjoyed the day when we learned about OpenMP and the fundamental design patterns of parallel programming,” said Claus. ​“This was a brilliant tutorial for all levels of experience. I learned a lot and the instructor made this session easy to follow.”

Georgia K. Stuart, a postdoctoral fellow of the Computational Hydraulics Group at the University of Texas at Austin, loves all things HPC and sought more education on exascale computing so she can one day go into HPC research support.

“Although I would have loved to see everyone in Chicago, I thought ATPESC 2021 was extremely well organized and executed for being online,” said Stuart. ​“Nearly all the tracks had a hands-on component, which kept me engaged. Also, Slack was indispensable both for asking my own questions and learning from other people’s questions.”

A key takeaway for Christopher Subich, a research scientist for Environment Canada, was that modern libraries and support systems, such as debuggers and profilers, are incredibly full-featured.

“The highlight for me was being able to put some of the information to work immediately,” said Subich. ​“Within a couple of hours of the tutorial on ARM Forge (a cross-platform development tool suite), for example, I had a remote debugging session configured and working with a particularly intricate model I work on in real life.”

The participants always want to know all about the latest HPC technologies, how to leverage them, and what’s coming on the horizon, said Raymond Loy, ATPESC program director and ALCF lead for training, debuggers and math libraries.

“However, that is a moving target,” said Loy. ​“Argonne, as well as the other DOE labs, are at the cutting edge of HPC deployment, so we have the expertise available to adapt the curriculum, for example, to cover quantum computing, artificial intelligence and GPU (graphics processing unit) computation.”

For the past few years, for example, ATPESC has included an entire day focused on training attendees to use machine learning and deep learning methods for science. 

“All too often, students and postdocs join research groups and accumulate an ad hoc knowledge of high performance computing that is mainly focused on the methods already in use by their group,” Loy added. ​“ATPESC provides exposure to a very broad range of topics and serves as a core curriculum. It plays an important role in priming the pipeline of the next generation of computational scientists. Some of our attendees have gone on to lead prominent DOE computing projects or become faculty who have, in turn, sent their students to ATPESC. We even had ATPESC alumni as speakers this year.”

One such alum was Suyash Tandon, a software system design engineer at AMD. He was an ATPESC attendee last year and returned this year as a speaker.

“ATPESC is a great venue for the scientific computing community to meet and learn from one another,” said Tandon. ​“The ATPESC workshop brings the bleeding tip of the developments in the HPC realm to the attendees, and they get to dip their hands in grease with hands-on sessions.”

As part of the training on hardware architectures, attendees learned about DOE’s upcoming exascale supercomputers as well as leading-edge AI platforms, including the Cerebras, SambaNova, Groq and Habana systems being deployed at Argonne. ATPESC provided an opportunity for DOE computing facilities and AI companies to connect with and further educate the community about emerging technologies that are redefining scientific computing.

“We hope that the attendees learned about the differentiation and value that SambaNova’s complete software and hardware solutions bring to running large-scale deep learning and AI for science applications with ease of use at the highest levels of performance, accuracy and scale,” said Marshall Choy, vice president of Product at SambaNova Systems.

While this year’s virtual program was safe for attendees during the pandemic, Loy and his team hope they can return in person next year when ATPESC marks its 10th anniversary.

“While we have continually tuned and updated the curriculum, we will be making a more thorough review in the coming year,” said Loy. ​“Additionally, we are thinking about ways to engage the growing number of ATPESC alumni, which now totals more than 700.”

ATPESC is funded by the Exascale Computing Project, a collaborative effort of the DOE Office of Science and the National Nuclear Security Administration, and organized by staff from the ALCF. The ALCF, OLCF and NERSC are DOE Office of Science User Facilities located at Argonne, Oak Ridge and Lawrence Berkeley national laboratories, respectively.

The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines. Supported by the U.S. Department of Energy’s (DOE’s) Office of Science, Advanced Scientific Computing Research (ASCR) program, the ALCF is one of two DOE Leadership Computing Facilities in the nation dedicated to open science.

Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.

The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

