PARAM Rudra C-DAC, Delhi

About PARAM Rudra

PARAM Rudra, a cutting-edge supercomputing facility, was established under Phase-3 of the National Supercomputing Mission's build approach. It boasts a peak computing power of 200 TFLOPS and was designed and commissioned by C-DAC to meet the computational needs of C-DAC Delhi and various research and engineering institutes in the region. The system is valuable for research in various scientific domains, including materials science, earth science, chemical and biological sciences, high energy physics, cosmology, astrophysics, and more.

PARAM Rudra Details

System Specifications
Theoretical Peak Floating-point Performance Total (Rpeak) 200 TFLOPS
Base Specifications (Compute Nodes) 2 X Intel Xeon GOLD 6240R, 24 Cores, 2.4 GHz Processors per node, 192 GB Memory, 800 GB SSD
Master/Service/Login Nodes 4 nos.
CPU only Compute Nodes (Memory) 36 nos. (192GB)
GPU Compute Nodes (GPU Cards) 2 nos. (4 Nvidia A100 PCIe)
Total Memory 8.064 TB
Interconnect Primary: 100 Gbps Mellanox InfiniBand network, 100% non-blocking, fat tree topology
Secondary: 1G Ethernet network
Management network: 1G Ethernet
Storage 50 TB
CPU Only Compute Nodes
Nodes 36
Cores 1728
Compute Power of Rpeak 132.69 TFLOPS
Each Node with 2 X Intel Xeon GOLD 6240R, 24 cores, 2.4 GHz processors
192 GB memory
800 GB SSD
GPU Only Compute Nodes
Nodes 2
CPU Cores 96
Rpeak 68.97 TFLOPS
Each Node with 2 X Intel Xeon GOLD 6240R, 24 cores, 2.4 GHz processors
192 GB Memory
2 x NVIDIA A100
800 GB SSD
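
The Rpeak figures quoted above follow directly from the processor specification: each Xeon Gold 6240R core can retire 32 double-precision FLOPs per cycle (two AVX-512 FMA units, an assumption consistent with the numbers in the table), so a dual-socket node peaks at roughly 3.69 TFLOPS. A minimal Python sketch of this arithmetic, reproducing the CPU-only partition figure to rounding:

    # Theoretical peak (Rpeak) arithmetic for the CPU-only partition.
    # Assumption: 32 double-precision FLOPs/cycle/core (two AVX-512 FMA units
    # on Intel Xeon Gold 6240R); the other values are taken from the tables above.

    SOCKETS_PER_NODE = 2
    CORES_PER_SOCKET = 24
    CLOCK_GHZ = 2.4
    FLOPS_PER_CYCLE = 32          # assumed AVX-512 dual-FMA throughput
    NODES = 36                    # CPU-only compute nodes

    node_rpeak_gflops = SOCKETS_PER_NODE * CORES_PER_SOCKET * CLOCK_GHZ * FLOPS_PER_CYCLE
    total_rpeak_tflops = node_rpeak_gflops * NODES / 1000.0

    print(f"Per-node Rpeak : {node_rpeak_gflops:.1f} GFLOPS")   # ~3686.4 GFLOPS
    print(f"Partition Rpeak: {total_rpeak_tflops:.2f} TFLOPS")  # ~132.7 TFLOPS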
Architecture Diagram:


Software Stack:

Installed Applications/Libraries

HPC Applications
  • Bio-informatics: MUMmer, HMMER, MEME, PHYLIP, mpiBLAST, ClustalW
  • Molecular Dynamics: NAMD (for CPU and GPU), LAMMPS, GROMACS
  • CFD: OpenFOAM, SU2
  • Material Modeling, Quantum Chemistry: Quantum-Espresso, Abinit, CP2K, NWChem
  • Weather, Ocean, Climate: WRF-ARW, WPS (WRF), ARWPost (WRF), RegCM, MOM, ROMS
Deep Learning Libraries
  • cuDNN, TensorFlow, Theano
Dependency Libraries
  • NetCDF, PNETCDF, Jasper, HDF5, Tcl, Boost, FFTW
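
Among the deep-learning libraries listed above, TensorFlow with cuDNN is the usual entry point for GPU work on the A100 nodes. A minimal Python sketch for confirming that a job has landed on a GPU node, assuming the installed TensorFlow build is GPU-enabled:

    # Minimal check that TensorFlow can see the node's NVIDIA A100 GPUs.
    # Assumes a GPU-enabled TensorFlow build from the installed software stack.
    import tensorflow as tf

    gpus = tf.config.list_physical_devices("GPU")
    print(f"TensorFlow {tf.__version__} sees {len(gpus)} GPU(s)")
    for gpu in gpus:
        print(" ", gpu.name)

    # Small sanity computation placed explicitly on the first GPU, if present.
    if gpus:
        with tf.device("/GPU:0"):
            x = tf.random.normal((1024, 1024))
            y = tf.linalg.matmul(x, x)
        print("Matmul result norm:", float(tf.norm(y)))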

Support

For any support, contact: hpc-delhi@cdac.in

PARAM Rudra Usage Report

Link To Be Added

*Note: The above data is sourced from C-Chakshu (Multi Cluster Monitoring Platform).

PARAM Rudra SN Bose, Kolkata

About PARAM Rudra

PARAM Rudra, a cutting-edge supercomputing facility, was established under Phase-3 of the National Supercomputing Mission's build approach. It boasts a peak computing power of 838 TFLOPS and was designed and commissioned by C-DAC to meet the computational needs of S. N. Bose National Centre for Basic Sciences (SNBNCBS), Kolkata, and various research and engineering institutes in the region. The system is valuable for research in various scientific domains, including materials science, earth science, chemical and biological sciences, high energy physics, cosmology, astrophysics, and more.

System Specifications
Theoretical Peak Floating-point Performance Total (Rpeak) 838 TFLOPS
Base Specifications (Compute Nodes) 2X Intel Xeon GOLD 6240R, 24 Cores, 2.4 GHz Processors per node, 192 GB Memory, 800 GB SSD
Master/Service/Login Nodes 10 nos.
CPU only Compute Nodes (Memory) 96 nos. (192GB)
GPU Ready Nodes (Memory) 26 nos. (192GB)
High Memory Compute Nodes 32 nos. (768GB)
Total Memory 49.536 TB
Interconnect Primary: 100 Gbps Mellanox InfiniBand network, 100% non-blocking, fat tree topology
Secondary: 10G/1G Ethernet network
Management network: 1G Ethernet
Storage 1.0 PiB
CPU Only Compute Nodes
Nodes 96
Cores 4608
Compute Power of Rpeak 353.89 TFLOPS
Each Node with 2 X Intel Xeon GOLD 6240R, 24 Cores, 2.4 GHz
192 GB memory
800 GB SSD
GPU Only Compute Nodes
Nodes 8
CPU Cores 384
Rpeak 275.81 TFLOPS
Each Node with 2 X Intel Xeon GOLD 6240R, 24 cores, 2.4 GHz
192 GB Memory
2 x NVIDIA A100
800 GB SSD
High Memory Compute Nodes
Nodes 32
CPU Cores 1536
Compute Power of Rpeak 117.964 TFLOPS
Each Node with 2 X Intel Xeon GOLD 6240R, 24 cores, 2.4 GHz
768 GB Memory
800 GB SSD
GPU Ready Nodes
Nodes 26
CPU Cores 1248
Rpeak 95.846 TFLOPS
Each Node with 2 X Intel Xeon GOLD 6240R, 24 cores, 2.4 GHz
192 GB Memory
2 x NVIDIA A100
800 GB SSD

PARAM Rudra Details

Architecture Diagram:


Software Stack:

Installed Applications/Libraries

HPC Applications
  • Bio-informatics: MUMmer, HMMER, MEME, PHYLIP, mpiBLAST, ClustalW
  • Molecular Dynamics: NAMD (for CPU and GPU), LAMMPS, GROMACS
  • CFD: OpenFOAM, SU2
  • Material Modeling, Quantum Chemistry: Quantum-Espresso, Abinit, CP2K, NWChem
  • Weather, Ocean, Climate: WRF-ARW, WPS (WRF), ARWPost (WRF), RegCM, MOM, ROMS
Deep Learning Libraries
  • cuDNN, TensorFlow, Theano
Dependency Libraries
  • NetCDF, PNETCDF, Jasper, HDF5, Tcl, Boost, FFTW
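
The dependency libraries above primarily support the weather, ocean and materials codes, but they can also be used directly from user scripts. As a small illustration, the Python sketch below writes and reads a NetCDF file; it assumes the netCDF4 Python bindings are available on top of the installed NetCDF/HDF5 libraries (the binding itself is not listed above, and the file path is illustrative):

    # Write and read back a tiny NetCDF file.
    # Assumption: the netCDF4 Python package is installed on top of the
    # NetCDF/HDF5 libraries listed above; the file name is illustrative.
    import numpy as np
    from netCDF4 import Dataset

    with Dataset("sample.nc", "w", format="NETCDF4") as ds:
        ds.createDimension("time", None)        # unlimited dimension
        ds.createDimension("level", 4)
        temp = ds.createVariable("temperature", "f4", ("time", "level"))
        temp.units = "K"
        temp[0, :] = np.array([287.1, 280.4, 272.9, 265.3], dtype="f4")

    with Dataset("sample.nc", "r") as ds:
        t = ds.variables["temperature"][:]
        print("temperature shape:", t.shape, "units:", ds.variables["temperature"].units)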

Support

For any support, contact: rudrasupport@bose.res.in

PARAM Rudra Usage Report

Link To Be Added

*Note: The above data is sourced from C-Chakshu (Multi Cluster Monitoring Platform).

PARAM Rudra IIT, Bombay

About PARAM Rudra

PARAM Rudra, a state-of-the-art supercomputing facility, was developed as part of Phase-3 of the National Supercomputing Mission's build initiative. Built using Rudra servers, which were designed and manufactured in India, it boasts a peak performance of 3.1 PFLOPS. The system was designed and deployed by C-DAC to address the computational needs of IIT Bombay and other research and engineering institutions in the region. It plays a crucial role in advancing research across multiple scientific fields, with a special focus on enhancing studies in material science and atomic physics.

PARAM Rudra Details

System Specifications
Theoretical Peak Floating-point Performance Total (Rpeak) 3 PFLOPS
Base Specifications (Compute Nodes) 2 X Intel Xeon GOLD 6240R, 24 Cores, 2.4 GHz Processors per node, 192 GB Memory, 800 GB SSD
Master/Service/Login Nodes 20 nos.
CPU only Compute Nodes (Memory) 482 nos. (192GB)
High Memory Nodes (Memory) 90 nos. (768GB)
GPU Compute Nodes (GPU Cards) 30 nos. (60 Nvidia A100 PCIe)
Total Memory 163.5 TB
Interconnect Primary: Mellanox InfiniBand NDR network, 100% non-blocking, fat tree topology
Secondary: 10G/1G Ethernet network
Management network: 1G Ethernet
Storage 4.4 PiB
CPU Only Compute Nodes
Nodes 482
Cores 23136
Compute Power of Rpeak 1776.6 TFLOPS
Each Node with 2 X Intel Xeon GOLD 6240R, 24 cores, 2.4 GHz processors
192 GB memory
800 GB SSD
GPU Only Compute Nodes
Nodes 30
CPU Cores 1440
Rpeak 1.034 PF
Each Node with 2 X Intel Xeon GOLD 6240R, 24 cores, 2.4 GHz processors
192 GB Memory
2 x NVIDIA A100
800 GB SSD
High Memory Compute Nodes
Nodes 90
Cores 4320
Compute Power of Rpeak 331.74 TFLOPS
Each Node with 2 X Intel Xeon GOLD 6240R, 24 cores, 2.4 GHz processors
768 GB memory
800 GB SSD
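
Multi-node jobs on the partitions above communicate over the non-blocking InfiniBand fat tree via MPI. The sketch below is a minimal Python "rank report plus allreduce"; it assumes mpi4py is available alongside the system MPI stack (mpi4py itself is not listed in the software stack) and that the job is launched with the site's usual MPI launcher:

    # Minimal MPI sketch: rank report plus an allreduce over the interconnect.
    # Assumption: mpi4py is installed on top of the cluster MPI library;
    # launch with the site's MPI launcher, e.g. `mpirun -np 96 python this_script.py`
    # (launcher name and process count are illustrative).
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    local_value = rank + 1                      # trivial per-rank payload
    total = comm.allreduce(local_value, op=MPI.SUM)

    if rank == 0:
        print(f"{size} ranks, sum of (rank+1) over all ranks = {total}")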
Architecture Diagram:


Software Stack:

Installed Applications/Libraries

HPC Applications
  • Bio-informatics: MUMmer, HMMER, MEME, PHYLIP, mpiBLAST, ClustalW
  • Molecular Dynamics: NAMD (for CPU and GPU), LAMMPS, GROMACS
  • CFD: OpenFOAM, SU2
  • Material Modeling, Quantum Chemistry: Quantum-Espresso, Abinit, CP2K, NWChem
  • Weather, Ocean, Climate: WRF-ARW, WPS (WRF), ARWPost (WRF), RegCM, MOM, ROMS
Deep Learning Libraries
  • cuDNN, TensorFlow, Theano
Dependency Libraries
  • NetCDF, PNETCDF, Jasper, HDF5, Tcl, Boost, FFTW

Support

For any support, contact: rudrasupport@iitb.ac.in

PARAM Rudra Usage Report

Link To Be Added

*Note: The above data is sourced from C-Chakshu (Multi Cluster Monitoring Platform).

PARAM Rudra GMRT, Pune

About PARAM Rudra

PARAM Rudra, a cutting-edge supercomputing facility, was established under Phase-3 of the National Supercomputing Mission's build approach. It boasts a peak computing power of 1 PFLOPS and was designed and commissioned by C-DAC to meet the computational needs of GMRT Narayangaon, Pune, and various research and engineering institutes in the region. The system is valuable for research in various scientific domains, including materials science, earth science, chemical and biological sciences, high energy physics, cosmology, astrophysics, and more.

PARAM Rudra Details

System Specifications
Theoretical Peak Floating-point Performance Total (Rpeak) 1.3 PFLOPS
Base Specifications (Compute Nodes) 2 X Intel Xeon GOLD 6240R, 24 Cores, 2.4 GHz Processors per node, 192 GB Memory, 800 GB SSD
Master/Service/Login Nodes 6 nos.
GPU Ready Nodes (Memory) 15 nos. (192GB)
GPU Compute Nodes (GPU Cards) 45 nos. (90 Nvidia A100 PCIe)
Total Memory 37.68 TB
Interconnect Primary: 100 Gbps Mellanox InfiniBand HDR network, 100% non-blocking, fat tree topology
Secondary: 10G/1G Ethernet network
Management network: 1G Ethernet
Storage 2.0 PiB
GPU Only Compute Nodes
Nodes 45
CPU Cores 2160
Rpeak 1386 TFLOPS
Each Node with 2 X Intel Xeon GOLD 6240R, 24 cores, 2.4 GHz processors
768 GB Memory
2 x NVIDIA A100
800 GB SSD
GPU Ready Nodes
Nodes 15
CPU Cores 720
Rpeak 55.29 TFLOPS
Each Node with 2 X Intel Xeon GOLD 6240R, 24 cores, 2.4 GHz processors
192 GB Memory
2 x NVIDIA A100
800 GB SSD
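
With multiple A100 cards per GPU node, a job can confirm what it has actually been allocated by querying NVML. The Python sketch below assumes the pynvml bindings (the nvidia-ml-py package) are available on the node; they are not part of the listed software stack, so the package name is an assumption:

    # Inventory the GPUs visible on the current node via NVML.
    # Assumption: the nvidia-ml-py (pynvml) package is available; on the GPU
    # nodes above this should report the NVIDIA A100 cards installed in the node.
    import pynvml

    pynvml.nvmlInit()
    try:
        count = pynvml.nvmlDeviceGetCount()
        print(f"GPUs visible on this node: {count}")
        for i in range(count):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            name = pynvml.nvmlDeviceGetName(handle)
            if isinstance(name, bytes):          # older pynvml returns bytes
                name = name.decode()
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            print(f"  GPU {i}: {name}, {mem.total / 2**30:.0f} GiB total memory")
    finally:
        pynvml.nvmlShutdown()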
Architecture Diagram:


Software Stack:

Installed Applications/Libraries

HPC Applications
  • Bio-informatics: MUMmer, HMMER, MEME, PHYLIP, mpiBLAST, ClustalW
  • Molecular Dynamics: NAMD (for CPU and GPU), LAMMPS, GROMACS
  • CFD: OpenFOAM, SU2
  • Material Modeling, Quantum Chemistry: Quantum-Espresso, Abinit, CP2K, NWChem
  • Weather, Ocean, Climate: WRF-ARW, WPS (WRF), ARWPost (WRF), RegCM, MOM, ROMS
Deep Learning Libraries
  • cuDNN, TensorFlow, Theano
Dependency Libraries
  • NetCDF, PNETCDF, Jasper, HDF5, Tcl, Boost, FFTW

Support

For any support, contact: brahmand@ncra.tifr.res.in

PARAM Rudra Usage Report

Link To Be Added

*Note: The above data is sourced from C-Chakshu (Multi Cluster Monitoring Platform).

PARAM Yukti

About PARAM Yukti

PARAM Yukti is a supercomputing facility established under the build approach of the National Supercomputing Mission, with a peak computing power of 1.8 PFLOPS. It was designed and commissioned by C-DAC to cater to the computational needs of JNCASR, Bangalore, and various research and engineering institutes of the region. The system is based on Intel Xeon Cascade Lake processors, NVIDIA Tesla V100 GPUs, and a Mellanox HDR100 InfiniBand interconnect, and is built with cutting-edge hardware and software technologies. Substantial components used to build this system were manufactured and assembled within India, a step towards the Government's Make in India initiative.

PARAM Yukti Details

System Specifications
Theoretical Peak Floating-point Performance Total (Rpeak) 1.8 PFLOPS
Base Specifications (Compute Nodes) 2 X Intel Xeon Cascade Lake 8268, 24 Cores, 2.9 GHz Processors per node, 192 GB Memory, 480 GB SSD
Master/Service/Login Nodes 10 nos.
CPU only Compute Nodes (Memory) 75 nos. (192GB)
GPU Nodes (Memory) 42 nos. (192GB)
High Memory Compute Nodes 39 nos. (768GB)
Total Memory 52.416 TB
Interconnect Primary: 100 Gbps Mellanox InfiniBand network, 100% non-blocking, fat tree topology
Secondary: 10G/1G Ethernet Network
Management network: 1G Ethernet
Storage 1 PiB, PFS-based
CPU Only Compute Nodes
Nodes 75
Cores 3600
Compute Power of Rpeak 334.08 TFLOPS
Each Node with 2 X Intel Xeon Cascade Lake 8268, 24 cores, 2.9 GHz processors
192 GB memory
480 GB SSD
GPU Only Compute Nodes
Nodes 42
CPU Cores 1680
CUDA Cores 757760
Rpeak CPU 134.4 TFLOPS + GPU 1154.4 TFLOPS
Each Node with 2 X Intel Xeon Cascade Lake 6248, 20 cores, 2.5 GHz processors
192 GB Memory
4 x NVIDIA V100 SXM2 GPU Cards (32 Nodes)
2 x NVIDIA V100 SXM2 GPU Cards (10 Nodes)
480 GB SSD
High Memory Compute Nodes
Nodes 39
CPU Cores 1872
Compute Power of Rpeak 173.7 TFLOPS
Each Node with 2 X Intel Xeon Cascade Lake 8268, 24 cores, 2.9 GHz processors
768 GB Memory
480 GB SSD
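
The CPU + GPU Rpeak split quoted for the GPU partition can be reproduced from the hardware counts above: 42 nodes with dual 20-core 2.5 GHz host CPUs, and 148 V100 SXM2 cards (32 nodes with 4 cards plus 10 nodes with 2 cards). A short Python sketch of that arithmetic, assuming 32 double-precision FLOPs/cycle/core for the host CPUs and 7.8 TFLOPS FP64 per V100 SXM2 card:

    # Reproduce the GPU-partition Rpeak split quoted above (CPU + GPU).
    # Assumptions: 32 DP FLOPs/cycle/core for the host Xeons (AVX-512 dual FMA)
    # and 7.8 TFLOPS FP64 per NVIDIA V100 SXM2 card.

    NODES = 42
    CORES_PER_NODE = 2 * 20       # dual-socket Xeon 6248, 20 cores each
    CLOCK_GHZ = 2.5
    FLOPS_PER_CYCLE = 32

    GPUS = 32 * 4 + 10 * 2        # 32 nodes with 4 cards, 10 nodes with 2 cards
    V100_FP64_TFLOPS = 7.8

    cpu_rpeak_tflops = NODES * CORES_PER_NODE * CLOCK_GHZ * FLOPS_PER_CYCLE / 1000.0
    gpu_rpeak_tflops = GPUS * V100_FP64_TFLOPS

    print(f"CPU Rpeak: {cpu_rpeak_tflops:.1f} TFLOPS")   # 134.4 TFLOPS
    print(f"GPU Rpeak: {gpu_rpeak_tflops:.1f} TFLOPS")   # 1154.4 TFLOPS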


Architecture Diagram:


Software Stack:

Installed Applications/Libraries

HPC Applications
  • Bio-informatics: MUMmer, HMMER, MEME, PHYLIP, mpiBLAST, ClustalW
  • Molecular Dynamics: NAMD (for CPU and GPU), LAMMPS, GROMACS
  • CFD: OpenFOAM, SU2
  • Material Modeling, Quantum Chemistry: Quantum-Espresso, Abinit, CP2K, NWChem
  • Weather, Ocean, Climate: WRF-ARW, WPS (WRF), ARWPost (WRF), RegCM, MOM, ROMS
Deep Learning Libraries
  • cuDNN, TensorFlow, Theano
Dependency Libraries
  • NetCDF, PNETCDF, Jasper, HDF5, Tcl, Boost, FFTW

Support

For any support, contact: yuktisupport@jncasr.ac.in

PARAM Yukti Usage Report

Publication

A total of 47 publications have been produced using PARAM Yukti up to 2023.

PARAM Siddhi-AI

About PARAM Siddhi-AI

The NPSF, C-DAC in Pune has commissioned the fastest HPC/AI system in India, the PARAM Siddhi-AI system, as part of the NSM initiative. The system comprises 336 NVIDIA A100 GPUs and is a dense GPU compute resource for executing popular AI and HPC workloads. Its configuration includes 42 NVIDIA DGX-A100 compute nodes, each with 2 AMD EPYC CPUs, 8 A100 GPUs and 1 TB RAM, plus 1 login node, giving a total peak computing capacity of 6.745 PFLOPS (double precision) and 210 PFLOPS (AI). PARAM Siddhi-AI is aimed at serving as an AI/HPC-specific cloud computing infrastructure for India, covering academia, R&D institutes and start-ups. As a centralized facility, it ensures increased accessibility and utilization, supports large-scale and more diverse R&D projects in the AI and HPC domains, and is dedicated to addressing India-specific real-life problems. The facility also enables India's massive data sets from areas such as healthcare and agriculture to be stored locally in high-throughput, efficient storage. Use cases for the PARAM Siddhi-AI system range from big data analytics to specialized AI/HPC solutions across multiple domains, viz. healthcare (precision diagnostics, non-invasive diagnostics, etc.), agriculture (precision agriculture, crop infestations, advanced agronomic advisory, etc.), weather forecasting, security and surveillance, financial inclusion and other services (fraud detection), and infrastructural tools such as NLP.

PARAM Siddhi-AI Details

System Specifications
NVIDIA DGX-A100 Compute Nodes 82 (20,992 CPU cores)
Total host (compute node) memory 82 TB (82 nodes * 1 TB per node)
NVIDIA A100-40GB Tensor Core GPUs 656 (82 nodes * 8 GPUs per node)
Total GPU Memory 26.24 TB (82 nodes * 8 GPUs per node * 40 GB per GPU)
Mellanox 200G HDR InfiniBand switches with 320 Tb/s aggregate switch throughput (compute communication) 800 ports (20 leaf switches * 40 ports per leaf)
Mellanox 200G HDR InfiniBand switches (storage delivery) 400 ports (10 switches * 40 ports per switch)
PFS-based storage (network attached) @ 250 GB/s, 4M IOPS 10.5 PiB (2-tier storage)
AIRAWAT-PSAI Compute Node Specification
Component Specification
CPU AMD EPYC 7742, 64 cores, 2.25 GHz
CPU Cores 128 cores (dual socket, each with 64 cores) [256 cores with hyper-threading]
L3 Cache 256 MB
System Memory (RAM) 1 TB
GPU NVIDIA A100 SXM4
GPU Memory 40 GB
Local Storage 14 TB
Total No. of GPUs per node 8
Networking Mellanox ConnectX-6 VPI (InfiniBand HDR), 1.6 Tb/s
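
The aggregate figures in the system specification table follow directly from the per-node specification above: 82 DGX-A100 nodes, each with 256 hardware threads, 1 TB of host memory and 8 x 40 GB A100 GPUs. A short Python sketch of that arithmetic:

    # Reproduce the aggregate figures quoted in the system specification table
    # from the per-node AIRAWAT-PSAI specification above.

    NODES = 82
    THREADS_PER_NODE = 256        # dual AMD EPYC 7742, hyper-threading enabled
    HOST_MEM_TB_PER_NODE = 1
    GPUS_PER_NODE = 8
    GPU_MEM_GB = 40               # NVIDIA A100-40GB

    print("CPU cores (with HT):", NODES * THREADS_PER_NODE)                   # 20992
    print("Host memory (TB):   ", NODES * HOST_MEM_TB_PER_NODE)               # 82
    print("A100 GPUs:          ", NODES * GPUS_PER_NODE)                      # 656
    print("GPU memory (TB):    ", NODES * GPUS_PER_NODE * GPU_MEM_GB / 1000)  # 26.24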
Architecture Diagram:

Software Stack:

Support

For any support, contact: airawat-outreach@cdac.in

PARAM Siddhi-AI Usage Report

PARAM Rudra IUAC, Delhi

About PARAM Rudra

PARAM Rudra, a state-of-the-art supercomputing facility, was developed as part of Phase-3 of the National Supercomputing Mission's build initiative. Built using Rudra servers, which were designed and manufactured in India, it boasts a peak performance of 3.1 PFLOPS. The system was designed and deployed by C-DAC to address the computational needs of IUAC, Delhi, and other research and engineering institutions in the region. It plays a crucial role in advancing research across multiple scientific fields, with a special focus on enhancing studies in material science and atomic physics.

PARAM Rudra Details

System Specifications
Theoretical Peak Floating-point Performance Total (Rpeak) 3 PFLOPS
Base Specifications (Compute Nodes) 2 X Intel Xeon Gold 6240R, 24 Cores, 2.4 GHz Processors per node, 192 GB Memory, 800 GB SSD
Master/Service/Login Nodes 20 nos.
CPU only Compute Nodes (Memory) 473 nos. (192GB)
GPU Nodes (GPU Cards) 30 nos. (60 NVIDIA A100 PCIe)
GPU Ready Compute Nodes (Memory) 32 nos. (192GB)
High Memory Compute Nodes 64 nos. (768GB)
Total Memory 148.312 TB
Interconnect Primary: 100 Gbps Mellanox InfiniBand network, 100% non-blocking, fat tree topology
Secondary: 10G/1G Ethernet network
Management network: 1G Ethernet
Storage 4.4 PiB
CPU Only Compute Nodes
Nodes 473
Cores 22704
Compute Power of Rpeak 1743.4 TFLOPS
Each Node with 2 X Intel Xeon Gold 6240R, 24 cores, 2.4 GHz processors
192 GB memory
800 GB SSD
GPU Only Compute Nodes
Nodes 30
CPU Cores 1440
Rpeak 924.12 TFLOPS
Each Node with 2 X Intel Xeon GOLD 6240R, 24 cores, 2.4 GHz processors
192 GB Memory
2 x NVIDIA A100
800 GB SSD
GPU Ready Compute Nodes
Nodes 32
CPU Cores 1536
Rpeak 117.952 TFLOPS
Each Node with 2 X Intel Xeon GOLD 6240R, 24 cores, 2.4 GHz processors
192 GB Memory
800 GB SSD
High Memory Compute Nodes
Nodes 64
CPU Cores 3072
Compute Power of Rpeak 239.50 TFLOPS
Each Node with 2 X Intel Xeon Gold 6240R, 24 cores, 2.4 GHz processors
768 GB Memory
800 GB SSD
Architecture Diagram:

Software Stack:

Support

For any support, contact: rudrasupport@iuac.res.in

PARAM Rudra Usage Report

Link To Be Added

*Note: The above data is sourced from C-Chakshu (Multi Cluster Monitoring Platform).