Enterprise AI Accelerators

GPU Specifications Database

Compare 12 enterprise AI accelerators from NVIDIA and AMD: complete specifications, performance metrics, and pricing for the Rubin, Blackwell, Hopper, and CDNA architectures.

12 GPU Models · 2 Manufacturers · 5 Architectures
NVIDIA: 9 GPUs · AMD: 3 GPUs
NVIDIA Rubin

Rubin R100

Next-Gen Architecture
Memory: TBD HBM4
Bandwidth: TBD
FP16: TBD
TDP: TBD
6th Gen Tensor Cores · 2026+
NVIDIA Grace Blackwell

GB300 NVL72

Grace Blackwell Superchip
Memory: 288GB HBM3e
Bandwidth: 8TB/s
FP4: 15 PFLOPS
TDP: 1400W
5th Gen Tensor Cores · NVLink 1.8TB/s
NVIDIA Grace Blackwell

GB200 NVL72

Grace Blackwell Superchip
Memory: 192GB HBM3e
Bandwidth: 8TB/s
FP4: 20 PFLOPS
TDP: 1000W
5th Gen Tensor Cores · NVLink 1.8TB/s
NVIDIA Blackwell Ultra

B300

Blackwell Ultra Architecture
Memory: 288GB HBM3e
Bandwidth: 8TB/s
FP16: 2.5 PFLOPS
TDP: 1400W
5th Gen Tensor Cores · 1.6Tbps Networking
NVIDIA Blackwell

B200

Blackwell Architecture
Memory: 192GB HBM3e
Bandwidth: 8TB/s
FP16: 5 PFLOPS
TDP: 1000W
5th Gen Tensor Cores · NVLink 1.8TB/s
NVIDIA Blackwell Pro

RTX PRO 6000

Professional Blackwell
Memory: 96GB GDDR7
Bandwidth: 1792 GB/s
CUDA Cores: 24,064
TDP: 600W
5th Gen Tensor Cores · PCIe 5.0
NVIDIA Blackwell Max-Q

RTX PRO 6000 Max-Q

Mobile Professional
Memory: 96GB GDDR7
Bandwidth: 1792 GB/s
CUDA Cores: 24,064
TDP: Optimized
5th Gen Tensor Cores · Mobile Workstation
NVIDIA Hopper

H200 SXM

Hopper + HBM3e
Memory: 141GB HBM3e
Bandwidth: 4.8TB/s
FP8: 3,958 TFLOPS
TDP: 700W
4th Gen Tensor Cores · NVLink 900GB/s
NVIDIA Hopper

H100 SXM5

Hopper Architecture
Memory: 80GB HBM3
Bandwidth: 3.35TB/s
FP8: 3,958 TFLOPS
TDP: 700W
4th Gen Tensor Cores · NVLink 900GB/s
AMD CDNA 4

MI355X

CDNA 4 Architecture
Memory: 288GB HBM3E
Bandwidth: 8TB/s
FP16: 2.3 PFLOPS
TDP: 1400W
CDNA 4 · ROCm 6.0+
AMD CDNA 3

MI325X

CDNA 3 Architecture
Memory: 256GB HBM3E
Bandwidth: 6TB/s
FP16: 1,307 TFLOPS
TDP: 1000W
304 CUs · ROCm 6.0+
AMD CDNA 3

MI300X

CDNA 3 Architecture
Memory: 192GB HBM3
Bandwidth: 5.3TB/s
FP16: 1,307 TFLOPS
TDP: 750W
304 CUs · ROCm 6.0+

Comparison Table

Side-by-side specifications for all 12 enterprise GPUs

| GPU Model | Architecture | Memory | Bandwidth | AI Performance | TDP | Price Range |
|---|---|---|---|---|---|---|
| Rubin R100 | Rubin | TBD HBM4 | TBD | TBD | TBD | Coming 2026 |
| GB300 NVL72 | Grace Blackwell | 288GB HBM3e | 8TB/s | 15 PFLOPS FP4 | 1400W | $50K-$60K |
| GB200 NVL72 | Grace Blackwell | 192GB HBM3e | 8TB/s | 20 PFLOPS FP4 | 1000W | $40K-$50K |
| B300 | Blackwell Ultra | 288GB HBM3e | 8TB/s | 2.5 PFLOPS FP16 | 1400W | $40K-$50K |
| B200 | Blackwell | 192GB HBM3e | 8TB/s | 5 PFLOPS FP16 | 1000W | $30K-$40K |
| RTX PRO 6000 | Blackwell Pro | 96GB GDDR7 | 1.79TB/s | 5X faster AI (FP4) | 600W | $18K-$25K |
| RTX PRO 6000 Max-Q | Blackwell Max-Q | 96GB GDDR7 | 1.79TB/s | 5X faster AI (FP4) | Optimized | Contact Sales |
| H200 SXM | Hopper | 141GB HBM3e | 4.8TB/s | 3,958 TFLOPS FP8 | 700W | $30K-$40K |
| H100 SXM5 | Hopper | 80GB HBM3 | 3.35TB/s | 3,958 TFLOPS FP8 | 700W | $35K-$40K |
| MI355X | CDNA 4 | 288GB HBM3E | 8TB/s | 2.3 PFLOPS FP16 | 1400W | $20K-$25K |
| MI325X | CDNA 3 | 256GB HBM3E | 6TB/s | 1,307 TFLOPS FP16 | 1000W | $15K-$20K |
| MI300X | CDNA 3 | 192GB HBM3 | 5.3TB/s | 1,307 TFLOPS FP16 | 750W | $10K-$15K |
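
For programmatic filtering, the rows above can be treated as structured data. Below is a minimal sketch in Python; the `Gpu` dataclass and its field names are our own illustration (not from any vendor tool), and only a sample of the rows is transcribed:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Gpu:
    model: str
    architecture: str
    memory_gb: Optional[int]        # None where the spec is still TBD
    bandwidth_tb_s: Optional[float]
    tdp_w: Optional[int]

# Sample rows transcribed from the comparison table above.
GPUS = [
    Gpu("GB300 NVL72", "Grace Blackwell", 288, 8.0, 1400),
    Gpu("B200", "Blackwell", 192, 8.0, 1000),
    Gpu("H200 SXM", "Hopper", 141, 4.8, 700),
    Gpu("MI355X", "CDNA 4", 288, 8.0, 1400),
    Gpu("MI300X", "CDNA 3", 192, 5.3, 750),
]

# Example query: GPUs with >= 192GB memory, ranked by bandwidth per watt
# (a rough efficiency proxy, not an official metric).
candidates = [g for g in GPUS if g.memory_gb is not None and g.memory_gb >= 192]
for g in sorted(candidates, key=lambda g: g.bandwidth_tb_s / g.tdp_w, reverse=True):
    print(f"{g.model}: {g.memory_gb}GB, {g.bandwidth_tb_s}TB/s, {g.tdp_w}W")
```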

GPU Selection by Use Case

Find the right accelerator for your workload

LLM Training

100B+ parameter models

  • B300 / GB300 - Highest memory (288GB)
  • MI355X - Competitive alternative (288GB)
  • H200 - Proven reliability (141GB)
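
The bullet list above leads with memory capacity because optimizer state dominates training footprints. A common rule of thumb for mixed-precision Adam training is roughly 16 bytes per parameter (FP16 weights, FP32 master copy, gradients, two optimizer moments), before activations. A back-of-envelope sketch, assuming that rule of thumb:

```python
import math

BYTES_PER_PARAM = 16  # rule of thumb for mixed-precision Adam, excluding activations

def training_state_tb(params_billion: float) -> float:
    """Approximate weights + gradients + optimizer-state footprint in TB."""
    return params_billion * 1e9 * BYTES_PER_PARAM / 1e12

def min_gpus_for_state(params_billion: float, gpu_memory_gb: int) -> int:
    """Lower bound on GPU count just to hold the training state (no activations)."""
    return math.ceil(training_state_tb(params_billion) * 1000 / gpu_memory_gb)

print(training_state_tb(100))        # 1.6 -> a 100B model carries ~1.6TB of state
print(min_gpus_for_state(100, 288))  # 6  (B300 / GB300 / MI355X class)
print(min_gpus_for_state(100, 141))  # 12 (H200 class)
```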

LLM Inference

High-throughput serving
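
Single-stream token generation is typically memory-bandwidth-bound: each new token reads every weight once, so bandwidth divided by model size gives a rough throughput ceiling. A sketch of that arithmetic (illustrative only; real throughput also depends on batching, KV-cache traffic, and kernel efficiency):

```python
def decode_ceiling_tok_s(bandwidth_tb_s: float, params_billion: float,
                         bytes_per_weight: float) -> float:
    """Upper bound on single-stream decode speed: one full weight read per token."""
    model_bytes = params_billion * 1e9 * bytes_per_weight
    return bandwidth_tb_s * 1e12 / model_bytes

# A 70B-parameter model:
print(decode_ceiling_tok_s(4.8, 70, 2.0))  # H200, FP16 weights: ~34 tok/s
print(decode_ceiling_tok_s(4.8, 70, 1.0))  # H200, FP8 weights:  ~69 tok/s
print(decode_ceiling_tok_s(8.0, 70, 0.5))  # B200, FP4 weights: ~229 tok/s
```

This is why the 8TB/s Blackwell and CDNA 4 parts, and low-precision formats like FP8 and FP4, matter so much for serving.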

Professional AI

Workstation deployment
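
For the 96GB workstation cards, the first question is simply whether the model fits. A quick capacity check (the 20% allowance for KV cache and activations is an assumption, not a measured figure):

```python
def fits_in_vram(params_billion: float, bytes_per_weight: float,
                 gpu_memory_gb: int = 96, overhead: float = 0.20) -> bool:
    """True if weights plus an overhead allowance fit in device memory."""
    weights_gb = params_billion * bytes_per_weight  # 1e9 params * bytes / 1e9
    return weights_gb * (1 + overhead) <= gpu_memory_gb

print(fits_in_vram(70, 2.0))  # False: 70B FP16 needs ~168GB with overhead
print(fits_in_vram(70, 1.0))  # True:  70B FP8 fits (~84GB with overhead)
print(fits_in_vram(70, 0.5))  # True:  70B FP4 fits (~42GB with overhead)
```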

Scientific HPC

Simulation & research

Need Help Selecting the Right GPU?

Our infrastructure architects analyze your workload and recommend optimal GPU configurations with detailed cost projections.
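
As one input to such a projection, electricity cost alone follows directly from TDP. A sketch, assuming sustained draw at TDP and an illustrative $0.12/kWh rate (cooling and facility overhead excluded):

```python
def annual_energy_cost_usd(tdp_watts: int, usd_per_kwh: float = 0.12,
                           utilization: float = 1.0) -> float:
    """Electricity cost for one GPU over a year at the given average utilization."""
    kwh = tdp_watts / 1000 * 24 * 365 * utilization
    return kwh * usd_per_kwh

print(round(annual_energy_cost_usd(700)))   # ~$736/yr  (H100/H200 class)
print(round(annual_energy_cost_usd(1400)))  # ~$1472/yr (B300/MI355X class)
```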
