GPU AI Servers &
HPC Clusters

India's certified NVIDIA infrastructure partner — delivering DGX, HGX, and A100 systems with full-stack HPC integration from fabric to scheduler to parallel storage.

Certified NVIDIA Partner

Direct access to NVIDIA's complete AI infrastructure portfolio. As an authorised NVIDIA partner, Radix Square provides genuine DGX and HGX systems with official warranty, NVIDIA Enterprise Support entitlements, and NGC software access. Our engineers are NVIDIA DGX-certified and can deploy, configure, and optimise your GPU infrastructure from day one.

NVIDIA AI Server Portfolio

Every system we supply is genuine, warranty-backed, and fully integrated before delivery.

Top Seller

NVIDIA DGX H100

GPUs: 8× H100 SXM5 80 GB
Interconnect: NVLink 4.0
GPU Bandwidth: 3.2 TB/s
Peak FP8: 32 PFLOPS
System RAM: 2 TB DDR5

Purpose-built for large language model training and fine-tuning at scale. Ideal for enterprises deploying GPT-class models in-house for compliance or latency reasons.

Get Quote →
New

NVIDIA HGX H200

GPUs: 8× H200 SXM5 141 GB
Memory Type: HBM3e
Interconnect: NVLink 4.0
Total GPU Memory: 1.1 TB
System RAM: 2 TB DDR5

Designed for billion-parameter inference and mixed-precision training. The 141 GB HBM3e per GPU allows larger models to fit in a single node, drastically reducing multi-node coordination overhead.
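The single-node fit claim is easy to sanity-check with back-of-envelope arithmetic. The sketch below assumes dense FP16/BF16 weights at 2 bytes per parameter and ignores KV-cache and activation overhead, so real deployments need headroom beyond these figures:

```python
# Back-of-envelope check: do a model's weights fit in one HGX H200 node?
# Node capacity: 8 GPUs x 141 GB HBM3e = 1128 GB (~1.1 TB).

BYTES_PER_PARAM_FP16 = 2   # FP16/BF16 weights, 2 bytes each
NODE_HBM_GB = 8 * 141      # total GPU memory per HGX H200 node

def weights_gb(params_billions: float,
               bytes_per_param: int = BYTES_PER_PARAM_FP16) -> float:
    """Weight memory in GB for a dense model (excludes KV cache/activations)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

for b in (70, 180, 405):
    gb = weights_gb(b)
    print(f"{b}B params -> {gb:.0f} GB weights; "
          f"fits in one node: {gb < NODE_HBM_GB}")
```

Even a 405B-parameter model's FP16 weights (~810 GB) fit within the node's 1128 GB of HBM3e, which is what lets inference stay on NVLink instead of crossing the inter-node fabric.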

Get Quote →
Popular

NVIDIA A100 Cluster Node

Form Factor: 4U / 8× A100
GPU Memory: 80 GB SXM4
Interconnect: NVLink 3.0
Peak TF32: 312 TFLOPS per GPU
Network: 2× 200 GbE / InfiniBand HDR

The proven choice for multi-node HPC and AI training clusters. Available as individual nodes for scale-out deployments. Cost-efficient for research institutes and mid-size AI teams.

Get Quote →
Inference

L40S Inference Workstation

GPUs: 1–4× L40S 48 GB
Interface: PCIe Gen 4
Workloads: Inference & Rendering
Form Factor: Tower / Rack 1–4U
Use Case: LLM serving, VDI

Flexible single-server inference and rendering platform. Ideal for departments needing dedicated GPU capacity without full cluster overhead — AI-assisted design, media rendering, and LLM API serving.

Get Quote →

High-Performance Computing Clusters

Every Radix Square HPC cluster integrates purpose-built compute nodes, a low-latency InfiniBand fabric, high-throughput parallel storage, and a production-grade job scheduler. We offer reference configuration tiers from research starter to enterprise scale — each fully customisable to your exact workload profile.

Starter

Research Starter

16 Compute Nodes

InfiniBand HDR fabric, SLURM scheduler, 200 TB Lustre storage. Ideal for university labs and small research teams getting started with HPC workloads.
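To give a flavour of day-to-day use on a SLURM-managed tier like this, a minimal batch script for an 8-GPU training job might look like the following. The partition name, script name, and resource counts are illustrative placeholders, not actual Radix Square defaults:

```bash
#!/bin/bash
#SBATCH --job-name=llm-train        # illustrative job name
#SBATCH --partition=gpu             # hypothetical partition name
#SBATCH --nodes=1                   # one compute node
#SBATCH --gpus-per-node=8           # request all eight GPUs on the node
#SBATCH --ntasks-per-node=8         # one task per GPU
#SBATCH --cpus-per-task=8           # CPU cores per task
#SBATCH --time=24:00:00             # wall-clock limit
#SBATCH --output=%x-%j.out          # job log written to shared Lustre storage

# Launch one process per GPU; train.py is a placeholder for your workload.
srun python train.py --config config.yaml
```

Submitted with `sbatch`, SLURM queues the job, allocates the node and GPUs, and writes logs to the parallel filesystem — the same workflow scales unchanged to the larger tiers.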

Enterprise

Enterprise HPC

256+ Compute Nodes

InfiniBand NDR / 400GbE fabric, PBS Pro or SLURM, multi-PB parallel storage, full DR, 24×7 NOC, and SLA-backed managed operations. Purpose-engineered for large-scale AI training.

Platforms We Deploy & Support

SLURM Workload Manager, Kubernetes, NVIDIA NGC, Mellanox InfiniBand, Lustre File System, IBM Spectrum Scale (GPFS), PBS Pro, Kubeflow, MLflow, NVIDIA Base Command Manager, OpenMPI, OpenHPC

Configure Your GPU Cluster

Share your workload requirements and our HPC architects will design a cluster specification tailored to your budget and performance targets.