HPC Domain-specific Workshop 2026

About the Workshop

The National Supercomputing Mission (NSM) has successfully deployed High-Performance Computing (HPC) systems equipped with GPU accelerators at various institutes across the country. These systems offer tremendous computational power, but one of the challenges faced by users is the efficient utilization of the available resources.

In continuation of the previous year’s initiatives, we are excited to announce a domain-specific workshop aimed at exploring the optimization and parallelization techniques for applications running on HPC systems. This workshop will specifically focus on executing user-specific input runs within widely used HPC domains. It presents a unique opportunity for participants to gain hands-on experience with the computational resources available and optimize their workflows for better performance.

To broaden the program’s scope and equip users with insights into the latest trends and advancements in HPC architectures, problem-solving strategies, and emerging technologies, we have incorporated sessions led by industry leaders such as Intel and AMD.

With this, we invite students, researchers, and HPC enthusiasts to actively participate in the workshop. This platform offers an excellent opportunity to collaborate, learn, and enhance your expertise in leveraging HPC systems to solve complex computational problems effectively.

Topics to be covered

The workshop will cover critical domains such as:

  • Molecular Dynamics
  • Computational Fluid Dynamics (CFD)
  • Bio-LLM
  • Weather Forecast
  • Materials Science
  • OneAPI, SYCL, SYCLomatic Tool, and OpenMP Offload
  • AMD Technologies – Hardware and Software

These domains represent areas where HPC systems have significant applications and require efficient resource management.

Target Audience

This training program is designed for students and faculty members from institutes that have HPC systems installed under the NSM initiative. It provides a valuable opportunity for them to enhance their skills in utilizing these advanced systems to solve real-world problems in their respective fields.

Prerequisites

Participants should have a foundational understanding of HPC and a solid knowledge of the respective domain(s) they wish to explore. A strong grasp of computational techniques in these areas will be beneficial for engaging fully with the course content.

Venue and Timing

Mode: Online
Dates: Thursday, 19 March 2026 to Thursday, 23 April 2026
Time: 3:00 pm – 6:00 pm

Schedule

Week Topic/Domain Date Time
1 Molecular Dynamics Thursday, 19 March 2026 3 pm – 6 pm
2 Weather Forecast Thursday, 26 March 2026 3 pm – 6 pm
2 Computational Fluid Dynamics Friday, 27 March 2026 3 pm – 6 pm
3 Bio-LLM Thursday, 9 April 2026 3 pm – 6 pm
3 OneAPI, SYCL, SYCLomatic Tool, and OpenMP Offload Friday, 10 April 2026 3 pm – 6 pm
4 AMD Tools and Architecture Thursday, 16 April 2026 3 pm – 6 pm
4 Materials Science Friday, 17 April 2026 3 pm – 6 pm

This program offers a structured learning path with weekly sessions, ensuring that participants can gradually build their expertise in optimizing and parallelizing applications across multiple domains.

We look forward to seeing you at this enriching training experience, which will enable you to maximize the potential of HPC systems and solve complex computational challenges.

Registration

Kindly click on the link below to register for the training program.

https://attendee.gotowebinar.com/register/2168160418062803805

Course material and session recording

We recommend attending the live sessions as per the schedule to get the full benefit of the HPC training program. However, participants who miss a live session due to unavoidable circumstances can access the session material through the HPCShiksha portal.

To access course material and session recordings, kindly visit the module at:

http://hpcshiksha.cdac.in/courses/course-v1:CDAC+DSW2026+2026/about

 
Contact us

In case of any issues related to this training program, kindly contact us at nsm-training@cdac.in

Delhi Technological University (DTU)


Training programs conducted
Sr. No. Title Dates Topics Covered Partner Institutes
1 One Week Short Term Training Course on "Introduction to High Performance Computing (HPC) and its Application in Artificial Intelligence (AI)" 7 August 2023 - 11 August 2023 Linux Fundamentals for HPC, HPC Fundamentals, Open source tools for High Performance Computing, High Performance Computing for Big Data and Quantum Computing, Deep Learning & its Applications C-DAC Pune
2 Two Week Training Program/Faculty Development Program on High Performance Computing and its Applications in Material Science (HPC-AMS) 6 Oct 2025 - 17 Oct 2025 Introduction to HPC and Architecture, File Systems and Storage in HPC, HPC cluster Access, Environment Setup, Introduction to Linux Shell Scripting for HPC, Open Multi Processing (OpenMP), Parallel Programming Concepts, Job Submission using SLURM Scheduler, Performance Optimization Techniques + HPC Application Installation, LAMMPS, Quantum Espresso, HPC Security and Data Management, Containerization and Virtualization, DFT and application in Material Science, Understanding Magnetism using DFT and Machine Learning, Ground state properties of Materials, DFT Ground state, AI meets DFT, Gaps in Material Science through Supercomputing, Computational Exploration of Quantum Materials, AI in Health Care Analytics, Piezoelectricity and Spin-Orbitronics C-DAC Pune
Other Activities

Memorandum of Association (MoA)

Delhi Technological University (DTU) and the Centre for Development of Advanced Computing (C-DAC) signed a Memorandum of Association (MoA) on 31 July 2024 to establish DTU as an NSM nodal centre.

Dignitaries present: Prof. Prateek Sharma, Hon’ble Vice Chancellor of DTU; Naveen Kumar, Scientist-E, Ministry of Electronics and Information Technology (MeitY), Government of India; Col. Asheet Nath, C-DAC Pune; Ashish Kuvelkar, C-DAC Pune; the Registrar of DTU; the Head of Department, CSE, DTU; and Prof. Rahul Katarya, Nodal Officer, DTU

Focus Areas: High-performance Computing (HPC), Artificial Intelligence (AI), Data Science

Useful links

Contact information

For any inquiries, feel free to reach out to the nodal centre at the email address below

rahulkatarya@dtu.ac.in

Video Repository - DTU Studio

Below is the link to access the video repository by DTU.

Learning Resources

HPC Resources


HPC Awareness Workshop 2026

This workshop offers foundational learning on High-Performance Computing (HPC) concepts, covering essential topics from the basics to build a strong understanding of HPC systems and practices.


HPC Awareness Workshop 2024-25

This workshop offers foundational learning on High-Performance Computing (HPC) concepts, covering essential topics from the basics to build a strong understanding of HPC systems and practices.


Domain Specific Workshop 2025

This workshop provides focused, in-depth learning on the application of High-Performance Computing (HPC) within specific domains. It is designed to help participants understand how HPC tools, architectures, and methodologies can be effectively applied to real-world domain problems, building both conceptual knowledge and practical skills.


Video Repository - CCDS (IIT Kharagpur)

Below is the link to access the video repository of the Centre for Computational and Data Science (CCDS), IIT Kharagpur.


MPI Tutorials (By IIT Goa)

Tutorials related to MPI by IIT Goa can be accessed by clicking the link below.


Video Repository - HPC Talks by IIT Goa

Below is the link to access the video repository containing HPC talks by IIT Goa.

HPC Awareness Workshop 2026

About

Under the National Supercomputing Mission (NSM), C-DAC is committed to the continuous upskilling of NSM users. Each year, a new cohort of users is onboarded on NSM systems, resulting in a recurring need to orient and train new users. Many of these newcomers initially face challenges in effectively accessing and utilizing the high-performance computing (HPC) systems deployed under NSM.

To address this requirement, C-DAC is organizing a comprehensive HPC Awareness Workshop aimed at building foundational knowledge in core high-performance computing concepts, tools, and technologies.

The workshop is spread over a period of one month, with two sessions conducted every week on Thursdays and Fridays.

The program introduces essential HPC concepts and provides practical exposure to key technologies such as OpenMP, MPI, GPU programming, and efficient usage of HPC resources using job schedulers. Designed for students, professionals, and researchers, the workshop strikes a balance between theoretical understanding and hands-on practice.

Objective
  • To familiarize new NSM users with HPC systems and environments
  • To build foundational knowledge of parallel programming models and tools
  • To enhance user confidence in accessing and efficiently utilizing NSM resources
  • To enable participants to effectively contribute to research and development goals under NSM

This initiative aims to equip participants with the necessary knowledge and skills to navigate HPC systems efficiently and actively support the objectives of the National Supercomputing Mission.

Topics to be covered

The following topics will be covered under this training program:

  • Introduction to HPC
  • HPC Environment setup and cluster access
  • Linux basics
  • SLURM workload manager
  • Domains in HPC and HPC applications installation using source and SPACK
  • Open Multi-Processing (OpenMP)
  • Message Passing Interface (MPI)
  • Introduction to GPUs and CUDA programming
  • OpenACC (Open Accelerators)
  • HPC Profiling tools
  • AI/ML/DL workloads on HPC clusters
Schedule
Session Date Topic Timing (IST)
1 15 January 2026 Introduction to HPC 3:00 pm to 4:00 pm
1 15 January 2026 Cluster access and environment setup 4:00 pm to 5:00 pm
2 16 January 2026 Introduction to Linux 3:00 pm to 5:00 pm
3 22 January 2026 Introduction to parallel programming and OpenMP 3:00 pm to 5:00 pm
4 23 January 2026 Open Multi-Processing (OpenMP) 3:00 pm to 5:00 pm
5 29 January 2026 Message Passing Interface (MPI) 3:00 pm to 5:00 pm
6 30 January 2026 Message Passing Interface (MPI) 3:00 pm to 5:00 pm
7 5 February 2026 Job Submission using SLURM 3:00 pm to 5:00 pm
8 6 February 2026 HPC Profiling tools 3:00 pm to 5:00 pm
9 12 February 2026 Introduction to GPUs and CUDA Programming 3:00 pm to 5:00 pm
10 13 February 2026 OpenACC (Open Accelerators) 3:00 pm to 5:00 pm
11 19 February 2026 Domains in HPC and applications installation using source and SPACK 3:00 pm to 5:00 pm
12 20 February 2026 AI/ML/DL Workloads on HPC Clusters 3:00 pm to 5:00 pm
13 26 February 2026 Doubt-clearing session 3:00 pm to 5:00 pm
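Session 7 covers job submission with SLURM. A minimal batch script of the kind typically introduced in such sessions might look as follows (a sketch only: the partition, module, and executable names are placeholders, not taken from the course material):

```shell
#!/bin/bash
#SBATCH --job-name=hello-hpc      # name shown in the queue
#SBATCH --partition=standard      # placeholder; use your cluster's queue name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4       # request 4 MPI ranks on one node
#SBATCH --time=00:10:00           # wall-clock limit (HH:MM:SS)
#SBATCH --output=hello-%j.out     # %j expands to the job ID

module load openmpi               # placeholder; module names are site-specific
srun ./hello_mpi                  # srun launches the requested ranks
```

The script is submitted with `sbatch job.sh`, the queue is inspected with `squeue`, and output appears in the file named by `--output` once the job runs.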
Registration

Kindly click on the link below to register for the training program.

https://attendee.gotowebinar.com/register/3448893550933400412

Course material and session recording

We recommend attending the live sessions as per the schedule to get the full benefit of the HPC training program. However, participants who miss a live session due to unavoidable circumstances can access the session material through the HPCShiksha portal.

To access course material and session recordings, kindly visit the module at:

http://hpcshiksha.cdac.in/courses/course-v1:CDAC+CDACHAW02+2026/about

Contact us

In case of any issues related to this training program, kindly contact us at nsm-training@cdac.in

Mini Hackathon at IIT Madras

Overview of the Hackathon

This hackathon was conducted as a pilot run for similar events in the future. The key objective was to optimize, scale, and tune user codes to achieve better performance and/or solve larger problem sizes. The key highlight of this activity was teamwork and cross-disciplinary collaboration between domain experts and computer scientists. It provided a platform for participants to learn new skills and technologies that they can apply alongside their domain knowledge. Being a pilot run, the hackathon was restricted to the Department of Aerospace Engineering (with the exception of a few applications from the Department of Mechanical Engineering).

Event Dates and Place

The event was jointly organized by C-DAC and the Department of Aerospace Engineering, IIT Madras, from 29th July 2024 to 1st August 2024.

Participating Teams

  1. Team HORFID: A hybrid FD/FV high order line-based solver for compressible flows on unstructured grids
  2. FEST-3D: A finite volume solver for the discretized Navier-Stokes equations on block-structured grids
  3. Compressible Multifluid: Used for simulating various compressible multifluid applications. It is based on an unstructured finite volume method
  4. TPS: Used for simulating the internal flow of solid rocket motors
  5. CompSquare: A high order structured compressible flow (CFD) solver used to study internal and external aerodynamics
  6. CFD of low speed reacting flow: Solves for flow (NS), temperature and species along with radiation transfer equation
  7. LABELS: A lattice Boltzmann method based solver for the simulation of incompressible flows
  8. Unified Gas Kinetics Scheme: A finite volume solver that solves the two-dimensional BGK-approximated Boltzmann transport equation
  9. Flapping Dynamics: An Immersed Boundary Method (IBM) based CFD solver that aids in analyzing aerodynamic efficiency, energy-harvesting systems, and bio-inspired engineering by simulating the flapping-wing mechanisms of birds and insects. A second code, Reduced Order Modeling using Autoencoders (implemented with TensorFlow), constructs a neural-network-based reduced-order model of the high-fidelity CFD data

Pre-Hackathon Activities (2 weeks prior to the event in online mode)

The teams were introduced to their respective mentors. As many of the codes targeted OpenACC-based GPU implementations, a short online training programme on OpenACC was conducted for the participants.

The following activities were carried out before the mini-hackathon itself:

  1. Compile and run the code on the target platform
  2. Select appropriate input test cases and set up a code-correctness validation mechanism
  3. Profile the code with the Intel VTune profiler and identify the hotspots

Conclusion

The hackathon was successfully executed, with participants actively engaged and extending their efforts beyond the event itself. Performance improvements were impressive, with the highest speed-up reaching 386 times and the lowest 1.6 times.

Future Work and Suggestions


Suggestions for future hackathons:

  • Target more user/legacy codes on GPU clusters with emerging tools (OpenACC)
  • Identify user codes that can scale on bigger clusters (20 PF) and extend the necessary support (including additional system time under NSM)
  • Target codes catering to ‘Grand Challenge Problems’
  • Conduct users’ meets at regular intervals (say, monthly) for effective engagement between domain experts and computer scientists
  • Identify codes that can be adapted to quantum computing using a hybrid environment

Mini Hackathon at IIT Guwahati

Overview of the Hackathon

This hackathon was conducted in continuation of the successful pilot run at IIT Madras.

The key objective of the hackathon was to optimize, scale, and tune user codes to achieve better performance and/or solve larger problem sizes. The key highlight of this activity was the cross-disciplinary collaboration and teamwork between domain experts (end-users) and computer scientists (mentors). It provided a platform for participants to learn new skills and technologies related to parallel programming, which will help them take their simulation codes to the next level of computational performance. Applications were received from across various departments.

Event Dates and Place

The event was jointly organized by C-DAC and IIT Guwahati from 5th Feb 2025 to 7th Feb 2025.

Participating Teams

In total, 20 teams applied for this event, of which 6 were selected to participate.

Pre-Hackathon Activities (2 weeks prior to the event in online mode)

The teams were introduced to their respective mentors. As many of the codes targeted OpenACC-based GPU implementations, a short online training programme on OpenACC was conducted for the participants.

The following activities were carried out before the mini-hackathon itself:

  1. Compile and run the code on the target platform
  2. Select appropriate input test cases and set up a code-correctness validation mechanism
  3. Profile the code with the Intel VTune profiler and identify the hotspots

Conclusion

The hackathon was successfully executed, with participants actively engaged and extending their efforts beyond the event itself. Performance improvements were impressive, with the highest speed-up reaching 5832 times and the lowest 2 times.

Future Work and Suggestions

Due to the limited time during the mini-hackathon, we could not reach a fully optimized version for many of the codes. Hence, we have requested the teams and their respective mentors to continue interacting online until the codes reach reasonable performance.

Suggestions for future hackathons:

  • Target more user/legacy codes on GPU clusters with emerging tools (OpenACC)
  • Identify user codes that can scale on bigger clusters (20 PF) and extend the necessary support (including additional system time under NSM)
  • Target codes catering to ‘Grand Challenge Problems’
  • Conduct users’ meets at regular intervals (say, monthly) for effective engagement between domain experts and computer scientists
  • Identify codes that can be adapted to quantum computing using a hybrid environment

Walchand College

Walchand College of Engineering (NSM Nodal Centre)

Training programs conducted
Sr. No. Title Dates Topics covered Partner institutes
1 Faculty Development in HPC 8 Apr 2025 - 13 Apr 2025 Faculty Development in HPC -
2 Faculty Development Program on HPC and AI 17 Mar 2025 - 22 Mar 2025 Faculty Development Program on HPC and AI -
3 Faculty Orientation Program and Inauguration of HPC Lab 3 Dec 2025 HPC Lab -
4 Short Term Program (STP) – Walchand College of Engineering, Sangli 1 Aug 2025 - 10 Aug 2025 Linux, OpenMP, MPI, CUDA, AI, GenAI, RAG, NLP in HPC, ML Walchand College of Engineering, Sangli
5 Faculty Development Program (FDP-I) – Vishwakarma Institute of Technology, Pune 23 Jun 2025 - 28 Jun 2025 Introduction to HPC, OpenMP, MPI, CUDA Vishwakarma Institute of Technology, Pune
6 Short Term Program (STP) – Indian Institute of Information Technology, Nagpur 16 Jun 2025 - 25 Jun 2025 Linux, OpenMP, MPI, CUDA, AI, RAG, ML, NLP in HPC, OpenACC Indian Institute of Information Technology, Nagpur
7 Faculty Development Program (FDP-H) – Yashwantrao Bhonsale Institute of Technology, Sawantwadi 26 May 2025 - 31 May 2025 Introduction to HPC, OpenMP, MPI, CUDA Yashwantrao Bhonsale Institute of Technology, Sawantwadi, Sindhudurg District, Maharashtra
8 Faculty Development Program (FDP-H) – SPIT, Mumbai 8 Apr 2025 - 13 Apr 2025 Introduction to HPC, OpenMP, MPI, CUDA, AI Bharatiya Vidya Bhavan's Sardar Patel Institute of Technology (SPIT), Mumbai
9 Faculty Development Program (FDP-H) – Marathwada Mitra Mandal's College of Engineering, Pune 17 Mar 2025 - 22 Mar 2025 Introduction to HPC, OpenMP, MPI, CUDA Marathwada Mitra Mandal's College of Engineering, Pune
10 Faculty Orientation Program (FOP) – Bharati Vidyapeeth College of Engineering, Pune 12 Mar 2025 Introduction to HPC, OpenMP, MPI, CUDA Bharati Vidyapeeth College of Engineering, Pune
11 Faculty Orientation Program (FOP) – Indian Institute of Information Technology, Nagpur 6 Jan 2025 Introduction to HPC, OpenMP, MPI, CUDA Indian Institute of Information Technology, Nagpur (IIITN)
12 Faculty Development Program (FDP) – Indian Institute of Information Technology, Nagpur 10 Jan 2024 - 16 Jan 2024 Introduction to HPC, OpenMP, MPI, CUDA Indian Institute of Information Technology, Nagpur (IIITN)
13 Faculty Orientation Program (FOP) – Karamveer Bhaurao Patil College of Engineering, Satara 14 Dec 2024 Introduction to HPC, OpenMP, MPI Karamveer Bhaurao Patil College of Engineering, Satara
14 Faculty Orientation Program (FOP) – Bharati Vidyapeeth College of Engineering, Kolhapur 8 Feb 2024 - 9 Feb 2024 OpenMP, MPI, CUDA, Introduction to HPC Bharati Vidyapeeth College of Engineering, Kolhapur
Useful links

Contact information

For any inquiries, feel free to reach out to the nodal centre at the email address below

dinesh.kulkarni@walchandsangli.ac.in

IIT Palakkad

IIT Palakkad (NSM Nodal Centre)

Training programs conducted
Sr. no. Title Dates Topics Covered Partner Institutes
1 HPC Shiksha - Basics of High Performance Computing 9th November 2020 - 12th February 2021 Computer Architecture for HPC, MPI, CUDA IIT Goa, IIT KGP, IITM
2 AI Shiksha - Introduction to Machine Learning 9th March 2021 - 22nd April 2021 Basic ML Topics, Supervised Learning IIT Goa, IIT KGP, IITM
3 AI Shiksha - Introduction to Deep Learning 28th June 2021 - 13th August 2021 Basics of AI, Neural Network, CNN, Natural Language Processing IIT Goa, IIT KGP, IITM
4 AI Shiksha - Applied Accelerated Artificial Intelligence (AAAI) 31st January 2022 - 1st May 2022 Fundamentals of AI; End to End Accelerated Data Learning; End to End Accelerated Data Science; AI in Industry IIT KGP, IITM, IIT Goa, Nvidia
5 Intel OneAPI 21 July 2022 Introduction to SYCL, oneAPI implementation Intel
6 NPTEL - AAAI 25 Jul 2022 - 14 Oct 2022 Introduction to AI System Hardware, Introduction to Containers, DeepOps, PyTorch, TensorFlow, Fundamentals of Distributed AI Computing, Accelerating neural network inference, Scale Out with DASK, case studies Nvidia
7 ACM India Summer School on "HPC and AI Compute Continuum" 19 June 2023 - 30 June 2023 HPC Basics, OpenMP, MPI, PyTorch, Accelerating NN inference using FPGAs -
8 NPTEL - AAAI 24 Jul 2023 - 13 Oct 2023 Introduction to AI System Hardware, Introduction to Containers, DeepOps, PyTorch, TensorFlow, Fundamentals of Distributed AI Computing, Accelerating neural network inference, Scale Out with DASK, case studies Nvidia
9 Scientific Computing using HPC (at KPRIET, Coimbatore) 11 Jun 2024 - 12 Jun 2024 Introduction to Parallel Computing, Molecular Dynamics and Computational Chemistry Packages Conducted at KPRIET, Coimbatore by faculty of IIT Palakkad
10 Workshop on HOOMD-Blue and OpenMM 15 Feb 2025 Exploring GPU-based molecular simulation tools -
11 Workshop on Ansys 7 Mar 2025 Exploring GPU acceleration of CFD simulations using Ansys -
Internships
Title Dates Project
Summer Internships Jun - Jul 2024 Several topics ranging from automating HPC administration tasks, ab initio simulations, vegetation dynamics, and computational chemistry
Useful links

Contact information

For any inquiries, feel free to reach out to the nodal centre at the email address below

sandeepchandran@iitpkd.ac.in

IIT Madras

IIT Madras (NSM Nodal Centre)

Training programs conducted in 2024
Sr. No. Title Dates Topics Covered Partner Institutes
1 Intel Workshop on oneAPI 25 Apr 2024 The Programming Challenges, The oneAPI and AI Saga, GenAI - The Intel Way, LLMs and Diffusion, Hugging Face and Intel Contribution, Demo with Intel Developer Cloud Intel
Training programs conducted in 2023
Sr. No. Title Dates Topics covered Partner institutes
1 CUDA Programming November 13, 2023 Computation, Memory, Synchronization Sri Ramakrishna Engineering College
2 Scientific Computing on GPUs with OpenACC November 4, 2023 Introduction to Parallelization, OpenACC Fundamentals, Numerical Methods -
3 Introduction to HPC October 31, 2023 Fundamentals of HPC, Synchronization, Concurrent Data Structures, Performance Tools, GPU & Heterogeneous Programming, Parallel Algorithm Analysis Mahindra University, Intel Labs, IIT Ropar
4 OpenMP Programming October 21, 2023 OpenMP Fundamentals, Scoping, Atomics, Reductions, Scheduling, Matrix Applications -
5 Programming with SYCL October 16, 2023 SYCL Memory and Program Structure, Task Scheduling, Optimization -
6 Programming AMD GPUs with HIP October 9, 2023 Introduction to AMD GPUs and HIP, Computation, Memory, Synchronization -
7 GPU Computing with MATLAB September 30, 2023 MATLAB Basics, Programming, GPU Integration, CUDA, Memory Models, Case Studies KREA University
8 Intel Workshop on oneAPI February 10, 2023 SYCL, oneAPI Implementation, Program Structure, Unified Shared Memory, Device Selector, Demos Intel
9 HPC Research Week November 20, 2023 Applications in Aerospace, Biology, Chemical, Computer Science, Mathematics, Mechanics Several Indian Institutions
Training programs conducted in 2022
Sr. No. Title Dates Topics covered Partner institutes
1 Mini-course on Concurrent Programming 25 July 2022 Introduction to Concurrent Objects and Linearizability Concepts, Memory Consistency Models, Synchronization Primitives, Locks, Barriers, Concurrent Data Structures, Work Distribution Mahindra University, Intel Labs, IIT Ropar, IIT Roorkee
2 CUDA Programming 2 May 2022 Computation, Memory, Synchronization -
3 GPU Programming with OpenACC 1 Feb 2022 OpenACC Fundamentals, Parallel Constructs, Loop Constructs, Data Transfer Optimization, Vector Operations, Matrix Operations -
4 GPU Programming with CUDA 15 Feb 2022 Computation, Memory, Synchronization -
5 Introduction to GPU Programming 20 Jun 2022 Computation, Memory, Synchronization KLA
6 HPC Symposium on AI and Biology 4 Jan 2022 HPC Fundamentals, HPC in Numerical Computing, Data Science & ML, Computational Biology NCSU, IISER Pune, ICTS-TIFR, IIITDM, Google, IIT Kharagpur, Intel, IISc, University of Brasilia
Training programs conducted in 2021
Sr. No. Title Dates Topics covered Partner institutes
1 HPC Shiksha - Basics of High Performance Computing 9th Nov 2020 – 12th Feb 2021 Computer Architecture for HPC, MPI, CUDA IIT Goa, IIT KGP, IITM
2 AI Shiksha - Introduction to Machine Learning 9th Mar – 22nd Apr 2021 Basic ML Topics, Supervised Learning IIT Goa, IIT KGP, IITM
3 AI Shiksha - Introduction to Deep Learning 28th Jun – 13th Aug 2021 Neural Networks, CNN, NLP, Transformers, Deep Q Learning IIT Goa, IIT Delhi, NVIDIA
4 HPC Workshop 20th Mar 2021 Basics of HPC, OpenMP, GPU Programming IIT Dharwad, IIT Palakkad
5 HPC Workshop on Material and Mechanics 28th Jul 2021 Material modeling, fracture, quantum-mechanical simulations IIT Delhi, IISc, TU Dresden, NCSU, etc.
6 KLA Workshop on AI and HPC in Semiconductor Manufacturing 27th Sep 2021 AI in manufacturing, ML models, GPU sharing KLA
7 Computer Architecture Winter School 27th Dec 2021 RISC-V, Memory, Performance, Design Concepts IITs, IISc, Industry Experts
8 ML for Construction Automation 4th Jun 2021 ML Basics, SVM, ANN, Case Studies University of Sharjah, Cambridge University
9 Introduction to Deep Learning 28th Jun 2021 AI History, Optimization, CNN, Transformers, NLP IIT Goa, NVIDIA, IIT Delhi
Training programs conducted in 2020
Sr. no. Title Dates Topics Covered Partner Institutes
1 Introductory HPC Course 9 Nov 2020 HPC Fundamentals, Shared Memory Programming with OpenMP, Distributed Computing with MPI, GPU Programming with CUDA IIT Goa, CDAC, IIT Kharagpur, IIT Palakkad, IIT Kanpur, NVIDIA, IIT Tirupati
2 HPC Workshop 27 Jul 2020 MPI, OpenMP, GPU Programming, Computational Catalysis, Neuroscience Research, Simulations in Turbomachines, Molecular Dynamics, Inverse Materials Design, Climate Simulations, Clean Energy HPC -
3 Qualcomm Lecture Series 7 Dec 2020 Apache TVM, Halide DSL, Super Block Scheduling, DNN Inference Acceleration, Program Analysis, Scientific Writing Qualcomm
4 HPC CFD Workshop 1 Dec 2020 Multiphase Flows, FSI, CFD Acceleration, Open-source HPC, Turbulent Flow Dynamics, Engineering CFD, Electrohydrodynamics, CFD Workflow IIT Delhi, IIT Bombay, IIT Kanpur, IISc
Internships
Sr. no. Title Dates Technologies Worked On
1 NSM Internship mid-May 2022 ODE, GPU, CFD, DNNs, memory redesign, IIF Solver, dynamic graph algorithms
Useful links

Resources

More information about the IIT Madras nodal centre and its HPC/AI resources can be found at the link below.


Contact information

For any inquiries, feel free to reach out to the nodal centre at the email address below

rupesh@cse.iitm.ac.in