About the program
The National Supercomputing Mission (NSM) has deployed High-Performance Computing (HPC) systems equipped with GPU accelerators at institutes across the country. These systems offer tremendous computational power, but one of the challenges users face is utilizing the available resources efficiently. In continuation of the previous year's initiatives, we are excited to announce a domain-specific workshop on optimization and parallelization techniques for applications running on HPC systems. The workshop focuses on executing user-specific input runs within widely used HPC domains, giving participants hands-on experience with the available computational resources and helping them optimize their workflows for better performance.

To broaden the program's scope and equip users with insights into the latest trends in HPC architectures, problem-solving strategies, and emerging technologies, we have included sessions led by industry leaders such as Intel, NVIDIA, and AMD. We invite students, researchers, and HPC enthusiasts to participate actively in the workshop. It is an excellent opportunity to collaborate, learn, and enhance your expertise in leveraging HPC systems to solve complex computational problems effectively.

Topics to be covered
The workshop will cover critical domains such as:
- Molecular Dynamics
- Computational Fluid Dynamics (CFD)
- Bio-LLM
- Weather Forecast
- oneAPI, SYCL, SYCLomatic Tool, and OpenMP Offload
- AMD Technologies: Hardware and Software
- Computer Vision
- Natural Language Processing (NLP)
- AI for Science – Earth-2
- Material Science
Target Audience
This training program is designed for students and faculty members from institutes with HPC systems installed under the NSM initiative. It offers a valuable opportunity to enhance their skills in using these advanced systems to solve real-world problems in their respective fields.

Prerequisites
Participants should have a foundational understanding of HPC and solid knowledge of the domain(s) they wish to explore. A strong grasp of computational techniques in these areas will help them engage fully with the course content.

Venue and Timing
Dates: 13 January 2025 – 24 March 2025
Frequency: One session per week (every Monday)
Time: 4:00 pm – 6:00 pm

Schedule
| Week | Technology | Topic/Domain | Date | Time |
|---|---|---|---|---|
| 1 | – | Agenda and Overview of Training | 13 January | 4 pm – 6 pm |
|  | – | Layout of the workshop |  |  |
|  | HPC: CPU-GPU Acceleration | Molecular Dynamics |  |  |
| 2 | HPC: CPU-GPU Acceleration | Computational Fluid Dynamics (CFD) | 20 January | 4 pm – 6 pm |
| 3 | AI for Science | Bio-LLM | Rescheduled to 24 March (see week 11) | – |
| 4 | HPC: CPU-GPU Acceleration | Weather Forecast | 3 February | 4 pm – 6 pm |
| 5 | Intel oneAPI | oneAPI, SYCL, SYCLomatic Tool, and OpenMP Offload | 10 February | 4 pm – 6 pm |
| 6 | AMD tools | AMD Technologies: Hardware and Software | 17 February | 4 pm – 6 pm |
| 7 | Gen-AI | Computer Vision | 24 February | 4 pm – 6 pm |
| 8 | Gen-AI | Natural Language Processing (NLP) | 3 March | 4 pm – 6 pm |
| 9 | AI for Science | Earth-2 | 10 March | 4 pm – 6 pm |
| 10 | HPC: CPU-GPU Acceleration | Material Science | 17 March | 4 pm – 6 pm |
| 11 | AI for Science | Bio-LLM | 24 March | 4 pm – 6 pm |
Registration
Kindly click on the link below to register for the training program.
https://attendee.gotowebinar.com/register/9063334165843108448

Course material and session recording
We recommend attending the live sessions as scheduled to get the full benefit of the HPC training program. However, if you miss a live session due to unavoidable circumstances, the session material will be made available through the HPCShiksha portal. To access course material and session recordings, kindly visit the module at:
http://hpcshiksha.cdac.in/courses/course-v1:CDAC+CDACDSW2025+2025/about

Slack channel
https://join.slack.com/t/nsmhrd/shared_invite/zt-2v4er8ixx-PfVmtDRtxdHj_Z0RmnzR2w