Supermicro 4U X13 HGX H100 8-GPU Liquid Cooled Server
4U liquid-cooled server with 8x NVIDIA H100 SXM 80GB GPUs on the DELTA-NEXT GPU baseboard; 1,280 total H100 GPUs available.
Certified pre-owned and new AI infrastructure. Servers, GPUs, networking, data center equipment, and power infrastructure with competitive pricing and expert deployment support.
SLYD's GPU server marketplace offers complete systems optimized for AI workloads, from single-GPU development machines to 8-GPU training clusters with NVLink interconnects. Our inventory includes new and certified pre-owned servers from Dell, HPE, Supermicro, Lenovo, and Gigabyte, configured with NVIDIA and AMD accelerators.
Our GPU server inventory spans the full range of AI compute requirements.
Choosing the right GPU server depends on your specific workload, infrastructure, and budget.
Match your GPU to your workload: H100/H200 for training large models, A100 for general-purpose AI, L40S or A10 for inference, and MI300X for AMD-optimized workflows.
Larger models require more VRAM per GPU. H200 offers 141GB HBM3e, H100 provides 80GB HBM3, and A100 is available in 40GB and 80GB configurations.
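To see why model size drives the VRAM figures above, here is a rough sketch of a common sizing heuristic: in mixed-precision training with an Adam-style optimizer, weights, gradients, and optimizer states together cost on the order of 16 bytes per parameter (activations, which vary by batch size and sequence length, are excluded). The function names and the 16-byte figure are illustrative assumptions, not vendor guidance.

```python
# Rough VRAM estimate for training a transformer in mixed precision.
# Assumption: ~16 bytes/parameter for weights + gradients + fp32 Adam
# optimizer states; activation memory is workload-dependent and excluded.

def training_vram_gb(params_billion: float, bytes_per_param: int = 16) -> float:
    """Approximate training memory in GB (excluding activations)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

def fits_per_gpu(params_billion: float, gpu_vram_gb: float) -> bool:
    """Does the model state fit on one GPU without sharding?"""
    return training_vram_gb(params_billion) <= gpu_vram_gb

# A 7B-parameter model needs roughly 112 GB of model state, so it exceeds
# a single 80GB H100 without sharding (e.g. ZeRO/FSDP), while a 141GB
# H200 can hold it on one device.
print(training_vram_gb(7))    # -> 112.0
print(fits_per_gpu(7, 80))    # -> False
print(fits_per_gpu(7, 141))   # -> True
```

By this estimate, anything past a few billion parameters pushes training onto multi-GPU systems even before activations are counted.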
For multi-GPU training, NVLink provides up to 900GB/s of GPU-to-GPU bandwidth per H100. PCIe systems work well for inference and single-GPU workloads.
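A quick back-of-envelope calculation shows why that interconnect bandwidth matters for training. The sketch below uses a simplified ring all-reduce cost model, where each GPU moves roughly 2*(n-1)/n of the gradient payload; the PCIe figure (~64GB/s for Gen5 x16) and the 14GB payload (fp16 gradients of a 7B model) are illustrative assumptions.

```python
# Simplified ring all-reduce time model: each GPU transfers about
# 2*(n-1)/n of the payload. Bandwidth figures are illustrative
# (900 GB/s NVLink per H100 vs ~64 GB/s PCIe Gen5 x16).

def allreduce_seconds(payload_gb: float, num_gpus: int, bw_gb_s: float) -> float:
    """Approximate time to all-reduce a gradient payload across GPUs."""
    return 2 * (num_gpus - 1) / num_gpus * payload_gb / bw_gb_s

# 14 GB of fp16 gradients (roughly a 7B-parameter model) across 8 GPUs:
nvlink = allreduce_seconds(14, 8, 900)
pcie = allreduce_seconds(14, 8, 64)
print(f"{nvlink * 1000:.1f} ms vs {pcie * 1000:.1f} ms")  # -> 27.2 ms vs 382.8 ms
```

The order-of-magnitude gap per synchronization step is why NVLink-connected SXM systems dominate large-scale training, while PCIe cards remain cost-effective where GPUs rarely exchange data.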
Verify your facility can support the thermal design power (TDP) requirements. An 8x H100 SXM system draws approximately 10.2kW under load.
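A facility power check like the one above can be sketched as simple arithmetic: GPU count times per-GPU TDP, plus a fixed overhead for CPUs, pumps/fans, NICs, and drives. The 700W per-GPU and ~4.6kW overhead figures below are illustrative assumptions chosen to reproduce the ~10.2kW system figure, and the rack budget in the usage example is hypothetical.

```python
import math

# Back-of-envelope facility power check for a GPU server.
# Assumptions: 700 W TDP per SXM GPU, ~4.6 kW overhead for CPUs,
# cooling, networking, and storage (illustrative, not vendor specs).

def system_power_kw(num_gpus: int, gpu_tdp_w: float = 700.0,
                    overhead_kw: float = 4.6) -> float:
    """Approximate peak system draw in kW."""
    return (num_gpus * gpu_tdp_w) / 1000.0 + overhead_kw

def racks_needed(num_systems: int, rack_budget_kw: float,
                 per_system_kw: float) -> int:
    """Minimum racks given a per-rack power budget."""
    return math.ceil(num_systems * per_system_kw / rack_budget_kw)

# 8x 700W SXM GPUs + ~4.6kW overhead matches the ~10.2kW figure above.
print(round(system_power_kw(8), 1))    # -> 10.2
# Four such systems on hypothetical 17.3kW racks:
print(racks_needed(4, 17.3, 10.2))     # -> 3
```

Checking this before purchase matters because many legacy racks are provisioned for 10-15kW, so a single 8-GPU system can consume an entire rack's budget.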
Every GPU server listing includes:
Need help selecting the right configuration? Our infrastructure specialists can recommend systems based on your workload, budget, and timeline. Contact sales for personalized guidance.