AI and HPC

Unleash the Power of AI and HPC with GPU Systems

Developing, training, and deploying AI and ML models involves intensive computational workloads that call for powerful systems such as the 421GE-TNRT (Intel-based) and the 4125GS-TNRT (AMD-based).


AI and HPC Workloads

Some of the world’s fastest supercomputing clusters now leverage GPUs and the power of AI to accelerate discovery. Scientists, researchers, and engineers use machine learning algorithms and GPU-accelerated parallel computing in their HPC workloads to get results faster.

HPC workloads process large amounts of data through simulations and analytics with demanding precision requirements. GPUs such as NVIDIA’s H100 offer exceptional double-precision performance, delivering 60 teraflops per GPU, and can be deployed at high GPU and CPU counts in dense form factors with rack-scale integration and liquid cooling.
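To make the double-precision point concrete, the sketch below times a large FP64 matrix multiplication on a GPU. It is a minimal illustration, assuming PyTorch and a CUDA-capable device; the matrix size and the rough 2n³ operation count are illustrative rather than a benchmark of any particular GPU.

    import time
    import torch

    assert torch.cuda.is_available(), "this sketch requires a CUDA-capable GPU"

    n = 8192
    # FP64 (double-precision) operands, as used by many HPC simulation codes
    a = torch.randn(n, n, dtype=torch.float64, device="cuda")
    b = torch.randn(n, n, dtype=torch.float64, device="cuda")

    torch.cuda.synchronize()            # wait for initialisation to finish
    start = time.perf_counter()
    c = a @ b                           # FP64 matrix multiplication on the GPU
    torch.cuda.synchronize()            # wait for the kernel before stopping the timer
    elapsed = time.perf_counter() - start

    # A dense n x n matmul performs roughly 2 * n^3 floating-point operations
    print(f"FP64 matmul: {elapsed:.3f} s, ~{2 * n**3 / elapsed / 1e12:.1f} TFLOPS")

Running the same multiplication in FP32 or FP16 would be considerably faster, which is why the dedicated FP64 throughput of data-centre GPUs matters for precision-sensitive HPC work.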

Powered by 8x NVIDIA L40S, A100, or H100 GPUs, these systems provide high-resolution, real-time visualisation and accelerate scientific simulation and modelling workloads such as:

  • Stress Analysis
  • Aerodynamics
  • Device Performance Prediction
  • Fluid Dynamics
  • Research
  • Exploration
  • Weather Prediction

Key Technologies:

  • NVIDIA H100 (SXM, NVL, PCIe), L40S, A100
  • NVIDIA Grace Hopper™ Superchip (Grace CPU and H100 GPU) with NVLink® Chip-2-Chip (C2C) interconnect and NVLink Network (up to 256 GPUs)
  • Dual socket Intel and AMD-based solutions with high CPU core counts
  • CPUs with integrated High Bandwidth Memory or larger L3 cache
  • PCIe 5.0 storage and networking
  • Liquid cooling

Use cases:

  • Manufacturing and engineering simulations (CAE, CFD, FEA, EDA)
  • Bio/life sciences (genomic sequencing, molecular simulation, drug discovery)
  • Scientific simulations (astrophysics, energy exploration, climate modelling, weather forecasting)

NVIDIA GPUs

AI and HPC servers are designed with up to 10 GPUs to handle demanding workloads that rely on parallel computing, such as deep learning, artificial intelligence, scientific computing, and data analytics. These workloads involve processing large amounts of data, performing complex calculations, and running simulations that benefit from the performance and efficiency of GPUs. A GPU accelerates this work by using thousands of cores to execute many tasks simultaneously, whereas a CPU has far fewer cores and processes tasks largely sequentially.

For unmatched performance and scalability, these servers support up to 10 NVIDIA GPUs in PCIe 5.0 x16 slots, with optional NVIDIA NVLink bridges for high-speed GPU-to-GPU interconnects.
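As a rough illustration of how a multi-GPU server is driven from software, the sketch below enumerates the visible GPUs and launches one matrix multiplication on each, so that all devices compute in parallel. It is a minimal example, assuming PyTorch on a CUDA-equipped system; it is not tied to any particular GPU count or model.

    import torch

    num_gpus = torch.cuda.device_count()
    print(f"Visible GPUs: {num_gpus}")
    for i in range(num_gpus):
        print(f"  cuda:{i} -> {torch.cuda.get_device_name(i)}")

    # Queue one large matrix multiplication per GPU. CUDA kernels are launched
    # asynchronously, so all devices work concurrently.
    results = []
    for i in range(num_gpus):
        x = torch.randn(4096, 4096, device=f"cuda:{i}")
        results.append(x @ x)

    # Wait for every device to finish before using the results
    for i in range(num_gpus):
        torch.cuda.synchronize(i)
    print("All GPUs have completed their share of the work")

Production training jobs typically hand this distribution to the framework, for example data-parallel training where each GPU processes its own slice of every batch, but the underlying idea is the same: many devices working on independent pieces of the problem at once.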


L40S

The NVIDIA L40S GPU is built to power the next generation of data centre workloads. It delivers breakthrough multi-workload acceleration for large language model (LLM) inference and training, graphics, and video applications.

  • FHFL DW
  • PCIe 4.0 x16
  • 350W
  • 48GB GDDR6
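To illustrate the kind of single-GPU LLM inference workload the L40S targets, the sketch below loads a causal language model and generates text on one GPU. It is a minimal example, assuming PyTorch and the Hugging Face transformers library; the model name is a placeholder, and any causal LM that fits in the card’s 48GB of memory would do.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "gpt2"  # placeholder; substitute any causal LM that fits in GPU memory

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
    model = model.to("cuda").eval()     # run inference in half precision on one GPU

    prompt = "High-performance computing is"
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

Training and fine-tuning follow the same pattern with a backward pass and an optimiser added, which is where the combination of compute and 48GB of GDDR6 comes into play.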

A100

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration—at every scale—to power the world’s highest-performing elastic data centres for AI, data analytics, and high-performance computing (HPC) applications.

A100 PCIe

  • PCIe form factor
  • NVLink
  • PCIe Gen4
  • 300W
  • 80GB HBM2e

A100 SXM

  • SXM form factor
  • NVLink
  • PCIe Gen4
  • 400W
  • 80GB HBM2e

 

H100

The NVIDIA H100 is an integral part of the NVIDIA data centre platform. Built for AI, HPC, and data analytics, the platform accelerates over 3,000 applications, and is available everywhere from data centre to edge, delivering both dramatic performance gains and cost-saving opportunities.

 

H100 SXM5

  • HGX™ H100 SXM5 board with 4 or 8 GPUs
  • NVLink & NVSwitch Fabric
  • PCIe 5.0
  • 700W per GPU
  • 80GB HBM3 per GPU

H100 NVL

  • 2x FHFL H100 GPUs with NVLink Bridge
  • PCIe 5.0
  • 400W per GPU
  • 94GB HBM3 per GPU

H100 PCIe

  • FHFL
  • PCIe 5.0 x16
  • 350W
  • 80GB HBM2e


Systems


Intel Based: 421GE-TNRT

Powered by dual 4th Generation Intel Xeon Scalable processors, which offer the most built-in accelerators of any Intel processor generation to improve performance in AI, data analytics, networking, storage, and HPC, this system provides substantial processing power and memory capacity for demanding applications. Each processor supports up to a 350W TDP, and the system’s 32 DDR5 DIMM slots accommodate up to 8TB of 4800MHz ECC memory.

This server has 24 hot-swap 2.5-inch drive bays that accommodate NVMe, SATA, or SAS drives, and different drive types can be mixed and matched to suit your needs. Two M.2 NVMe slots are also available for additional storage options.

The 421GE-TNRT also includes features that ensure reliability, security, and manageability. It has 4 redundant 2700W Titanium-level power supplies, 8 hot-swap heavy-duty fans, and optional liquid cooling for optimal thermal performance. Additionally, it is equipped with cryptographically signed firmware, a hardware Trusted Platform Module (TPM), and a silicon root of trust for enhanced security.

Recommended Configuration

Intel Based

  • 2x 8462Y+
  • 24x 32GB DDR5
  • 2x 480GB M.2
  • 2x 3.84TB NVMe
  • 8x NVIDIA L40S/A100/H100

 

AMD Based: 4125GS-TNRT

The 4125GS-TNRT is powered by dual AMD EPYC™ 9004 Series (Genoa) processors with up to 128 cores/256 threads and up to a 400W TDP each. These processors are based on the Zen 4 architecture and offer exceptional performance, scalability, and efficiency for your workloads.

Moreover, this server supports up to 24 hot-swap NVMe/SATA/SAS drive bays, including 4 dedicated NVMe bays, giving you ample storage capacity and speed for your data. You can also use the M.2 slot for additional NVMe storage or boot devices.

Equipped with 4000W redundant Titanium-level power supplies, this server delivers high efficiency and reliability. It also features IPMI 2.0 with virtual media over LAN and KVM-over-LAN support for easy remote management and monitoring.

Recommended Configuration

AMD Based

  • 2x AMD EPYC 9454
  • 32x 32GB DDR5
  • 2x 480GB M.2
  • 2x 3.84TB NVMe
  • 8x NVIDIA L40S/A100/H100

 


Ready to Get Started?

If you are ready to take your AI and HPC workloads to the next level with DiGiCOR GPU Servers, don’t hesitate to contact us today. We are here to help you find the best solution for your needs and budget.

Don’t settle for less. Choose DiGiCOR GPU Servers for AI and HPC today.

Contact Us Today!