About PARAM Utkarsh
PARAM Utkarsh is a High Performance Computing system set up at C-DAC, Bangalore under the National Supercomputing Mission (NSM), Government of India. PARAM Utkarsh is based on Intel Cascade Lake processors and NVIDIA Tesla V100 GPUs with a 100 Gbps non-blocking InfiniBand interconnect. Equipped with 50,000+ compute cores (CPU and GPU) and a liquid cooling system for an efficient PUE, PARAM Utkarsh offers a peak computing power of 838 TFLOPS.
The PARAM Utkarsh HPC system is set up at CTSF under the National Supercomputing Mission (NSM) of the Government of India. The current setup supports HPC simulations, Big Data analytics and cloud services, mainly to meet the requirements of the Micro, Small and Medium Enterprises (MSME) sector.
MSMEs are the backbone of the Indian economy, and many could use high performance computing (HPC) to enhance their business. However, adopting HPC is challenging for MSMEs: they often have little in-house expertise and limited access to HPC-specific hardware. The NSM project fills this gap by making it easier for the MSME sector to realize its ideas, simulations and application implementations on the PARAM Utkarsh resources at C-DAC. The overall objective of PARAM Utkarsh is to take the MSME business model to the next level of digital transformation.
PARAM Utkarsh Details
| System Specifications | |
|---|---|
| Theoretical Peak Floating-point Performance Total (Rpeak) | 838 TFLOPS |
| Base Specifications (Compute Nodes) | 2 × Intel Xeon Cascade Lake 8268 processors (24 cores, 2.9 GHz) per node, 192 GB memory, 480 GB SSD |
| Master/Service/Login Nodes | 10 nos. |
| CPU only Compute Nodes (Memory) | 107 nos. (192 GB) |
| GPU Nodes (Memory) | 10 nos. (192 GB) |
| High Memory Compute Nodes | 39 nos. (768 GB) |
| Total Memory | 52.416 TB |
| Interconnect | Primary: 100 Gbps Mellanox InfiniBand, 100% non-blocking, fat-tree topology; Secondary: 10G/1G Ethernet; Management network: 1G Ethernet |
| Storage | 1 PiB parallel file system (PFS) based storage |
| CPU Only Compute Nodes | |
|---|---|
| Nodes | 107 |
| Cores | 5136 |
| Rpeak | 476.6 TFLOPS |
| Each Node with | 2 × Intel Xeon Cascade Lake 8268 processors (24 cores, 2.9 GHz), 192 GB memory, 480 GB SSD |
| GPU Only Compute Nodes | |
|---|---|
| Nodes | 10 |
| CPU Cores | 400 |
| CUDA Cores | 102400 |
| Rpeak | 32 TFLOPS (CPU) + 156 TFLOPS (GPU) |
| Each Node with | 2 × Intel Xeon Skylake 6248 processors (20 cores, 2.5 GHz), 192 GB memory, 2 × NVIDIA V100 SXM2 GPU cards, 480 GB SSD |
| High Memory Compute Nodes | |
|---|---|
| Nodes | 39 |
| CPU Cores | 1872 |
| Rpeak | 173.7 TFLOPS |
| Each Node with | 2 × Intel Xeon Cascade Lake 8268 processors (24 cores, 2.9 GHz), 768 GB memory, 480 GB SSD |
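The per-partition Rpeak figures above follow directly from the node counts and clock speeds. A minimal sanity-check sketch, assuming 32 double-precision FLOPs per cycle per core (AVX-512 FMA on Cascade Lake/Skylake) and NVIDIA's quoted 7.8 TFLOPS FP64 peak per V100:

```python
# Sanity-check of the published Rpeak figures.
# Assumption: 32 FP64 FLOPs/cycle/core (2 AVX-512 FMA units x 8 doubles x 2 ops).
FLOPS_PER_CYCLE = 32

def cpu_rpeak_tflops(nodes, cores_per_node, ghz):
    """Peak double-precision TFLOPS of a CPU partition."""
    return nodes * cores_per_node * ghz * FLOPS_PER_CYCLE / 1000

cpu_only  = cpu_rpeak_tflops(107, 48, 2.9)  # CPU-only nodes
high_mem  = cpu_rpeak_tflops(39, 48, 2.9)   # high-memory nodes
gpu_hosts = cpu_rpeak_tflops(10, 40, 2.5)   # host CPUs of the GPU nodes
gpu_cards = 10 * 2 * 7.8                    # 20 x V100, 7.8 TFLOPS FP64 each

# Partition totals match the tables; the grand total is ~838 TFLOPS.
print(round(cpu_only, 1), round(high_mem, 1),
      round(gpu_hosts, 1), gpu_cards,
      round(cpu_only + high_mem + gpu_hosts + gpu_cards))
```

The same arithmetic reproduces the 52.416 TB total memory figure: 107 × 192 GB + 10 × 192 GB + 39 × 768 GB = 52,416 GB.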
PARAM Utkarsh Architecture Diagram:
Software Stack:
Installed Applications/Libraries
HPC Applications
- Bio-informatics: MUMmer, HMMER, MEME, PHYLIP, mpiBLAST, ClustalW
- Molecular Dynamics: NAMD (for CPU and GPU), LAMMPS, GROMACS
- CFD: OpenFOAM, SU2
- Material Modeling, Quantum Chemistry: Quantum-Espresso, Abinit, CP2K, NWChem
- Weather, Ocean, Climate: WRF-ARW, WPS (WRF), ARWPost (WRF), RegCM, MOM, ROMS
Deep Learning Libraries
- cuDNN, TensorFlow, Theano
Dependency Libraries
- NetCDF, PNETCDF, Jasper, HDF5, Tcl, Boost, FFTW
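The applications above would typically be run through the cluster's batch scheduler rather than interactively. A minimal batch-script sketch, assuming the system uses a Slurm-style scheduler and environment modules; the partition name and the `gromacs` module name are illustrative assumptions, not taken from the system's documentation:

```shell
#!/bin/bash
# Illustrative batch script for a GROMACS run on one CPU-only node.
# Partition and module names below are assumptions -- verify them with
# `sinfo` and `module avail` on the actual system.
#SBATCH --job-name=gmx-example
#SBATCH --partition=cpu            # assumed partition name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48       # 2 x 24-core Cascade Lake 8268
#SBATCH --time=01:00:00

module load gromacs                # assumed module name
mpirun -np 48 gmx_mpi mdrun -s topol.tpr
```

Submission would follow the usual pattern (`sbatch job.sh`, then `squeue -u $USER` to monitor).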
Support
For any support, contact: utkarsh-support@cdac.in
PARAM Utkarsh Usage Report
Link To Be Added
*Note: The above data comes from C-Chakshu (Multi-Cluster Monitoring Platform).*
Publications
A total of 39 publications based on work carried out on PARAM Utkarsh have been published up to 2023.