GPU Specifications Database
Compare 12 enterprise AI accelerators from NVIDIA and AMD. Complete specifications, performance metrics, and pricing for Blackwell, Hopper, and CDNA architectures.
12 GPU models · 2 manufacturers · 5 architectures (NVIDIA: 9 GPUs, AMD: 3 GPUs)
NVIDIA
- **Rubin R100** (Rubin, next-gen architecture): Memory TBD (HBM4) · Bandwidth TBD · FP16 TBD · TDP TBD
- **GB300 NVL72** (Grace Blackwell Superchip): Memory 288GB HBM3e · Bandwidth 8TB/s · FP4 15 PFLOPS · TDP 1400W
- **GB200 NVL72** (Grace Blackwell Superchip): Memory 192GB HBM3e · Bandwidth 8TB/s · FP4 20 PFLOPS · TDP 1000W
- **B300** (Blackwell Ultra): Memory 288GB HBM3e · Bandwidth 8TB/s · FP16 2.5 PFLOPS · TDP 1400W
- **B200** (Blackwell): Memory 192GB HBM3e · Bandwidth 8TB/s · FP16 5 PFLOPS · TDP 1000W
- **RTX PRO 6000** (Blackwell Pro, professional): Memory 96GB GDDR7 · Bandwidth 1792 GB/s · 24,064 CUDA cores · TDP 600W
- **RTX PRO 6000 Max-Q** (Blackwell Max-Q, mobile professional): Memory 96GB GDDR7 · Bandwidth 1792 GB/s · 24,064 CUDA cores · TDP optimized
- **H200 SXM** (Hopper + HBM3e): Memory 141GB HBM3e · Bandwidth 4.8TB/s · FP8 3,958 TFLOPS · TDP 700W
- **H100 SXM5** (Hopper): Memory 80GB HBM3 · Bandwidth 3.35TB/s · FP8 3,958 TFLOPS · TDP 700W

AMD
- **MI355X** (CDNA 4): Memory 288GB HBM3e · Bandwidth 8TB/s · FP16 2.3 PFLOPS · TDP 1400W
- **MI325X** (CDNA 3): Memory 256GB HBM3e · Bandwidth 6TB/s · FP16 1,307 TFLOPS · TDP 1000W
- **MI300X** (CDNA 3): Memory 192GB HBM3 · Bandwidth 5.3TB/s · FP16 1,307 TFLOPS · TDP 750W
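The listings above quote bandwidth in a mix of GB/s and TB/s. A minimal normalizer (a hypothetical helper, not part of any vendor tooling) can convert both to GB/s for comparison, assuming the marketing convention 1 TB/s = 1000 GB/s:

```python
import re
from typing import Optional

def bandwidth_gbs(spec: str) -> Optional[float]:
    """Parse a bandwidth string like '8TB/s', '4.8TB/s', or '1792 GB/s'
    into GB/s. Returns None for non-numeric entries such as 'TBD'."""
    m = re.match(r"([\d.]+)\s*(TB|GB)/s", spec.strip(), re.IGNORECASE)
    if not m:
        return None
    value, unit = float(m.group(1)), m.group(2).upper()
    # Marketing convention: 1 TB/s = 1000 GB/s (decimal, not 1024).
    return value * 1000 if unit == "TB" else value

# Values from the listings above:
# bandwidth_gbs("1792 GB/s") -> 1792.0
# bandwidth_gbs("4.8TB/s")   -> 4800.0
# bandwidth_gbs("TBD")       -> None
```

With everything in GB/s, the 1792 GB/s of the RTX PRO 6000 can be compared directly against the 8TB/s (8000 GB/s) HBM parts.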
Quick Reference: Comparison Table
Side-by-side specifications for all 12 enterprise GPUs
| GPU Model | Architecture | Memory | Bandwidth | AI Performance | TDP | Price Range |
|---|---|---|---|---|---|---|
| Rubin R100 | Rubin | TBD HBM4 | TBD | TBD | TBD | Coming 2026 |
| GB300 NVL72 | Grace Blackwell | 288GB HBM3e | 8TB/s | 15 PFLOPS FP4 | 1400W | $50K-$60K |
| GB200 NVL72 | Grace Blackwell | 192GB HBM3e | 8TB/s | 20 PFLOPS FP4 | 1000W | $40K-$50K |
| B300 | Blackwell Ultra | 288GB HBM3e | 8TB/s | 2.5 PFLOPS FP16 | 1400W | $40K-$50K |
| B200 | Blackwell | 192GB HBM3e | 8TB/s | 5 PFLOPS FP16 | 1000W | $30K-$40K |
| RTX PRO 6000 | Blackwell Pro | 96GB GDDR7 | 1.79TB/s | 5X faster AI (FP4) | 600W | $18K-$25K |
| RTX PRO 6000 Max-Q | Blackwell Max-Q | 96GB GDDR7 | 1.79TB/s | 5X faster AI (FP4) | Optimized | Contact Sales |
| H200 SXM | Hopper | 141GB HBM3e | 4.8TB/s | 3,958 TFLOPS FP8 | 700W | $30K-$40K |
| H100 SXM5 | Hopper | 80GB HBM3 | 3.35TB/s | 3,958 TFLOPS FP8 | 700W | $35K-$40K |
| MI355X | CDNA 4 | 288GB HBM3e | 8TB/s | 2.3 PFLOPS FP16 | 1400W | $20K-$25K |
| MI325X | CDNA 3 | 256GB HBM3e | 6TB/s | 1,307 TFLOPS FP16 | 1000W | $15K-$20K |
| MI300X | CDNA 3 | 192GB HBM3 | 5.3TB/s | 1,307 TFLOPS FP16 | 750W | $10K-$15K |
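One way to use the table is a quick efficiency ranking. The sketch below hard-codes the FP16 figures from the table (FP4/FP8 entries are skipped because they are not directly comparable across precisions) and ranks by TFLOPS per watt of TDP. This is a rough proxy only: real efficiency depends on workload, clocks, and cooling.

```python
# FP16 throughput (TFLOPS) and TDP (W) transcribed from the
# comparison table above; 2.5 PFLOPS = 2500 TFLOPS, etc.
fp16_specs = {
    "B300":   (2500, 1400),
    "B200":   (5000, 1000),
    "MI355X": (2300, 1400),
    "MI325X": (1307, 1000),
    "MI300X": (1307, 750),
}

# Rank by FP16 TFLOPS per watt of TDP, highest first.
ranked = sorted(
    ((tflops / tdp, name) for name, (tflops, tdp) in fp16_specs.items()),
    reverse=True,
)
for eff, name in ranked:
    print(f"{name}: {eff:.2f} TFLOPS/W")
```

By this metric the B200 (5.00 TFLOPS/W) leads, with the MI300X's low 750W TDP keeping it ahead of the higher-throughput MI355X.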
Recommendations: GPU Selection by Use Case
Find the right accelerator for your workload
LLM Training
100B+ parameter models
- GB300 NVL72 / GB200 NVL72 - Rack-scale FP4 throughput, 8TB/s bandwidth
- B300 / B200 - 288GB/192GB HBM3e for large model states
LLM Inference
High-throughput serving
- H200 SXM - 141GB HBM3e, 4.8TB/s for memory-bound serving
- MI300X - 192GB HBM3 at the lowest price point
Professional AI
Workstation deployment
- RTX PRO 6000 - 96GB, 5X faster inference
- RTX PRO 6000 Max-Q - Mobile workstations
Scientific HPC
Simulation & research
- MI355X / MI300X - Strong FP64 (~80 TFLOPS)
- H100 / H200 - Broad software support
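The use-case shortlists above can be encoded as a simple lookup. The mapping below restates the document's workstation and HPC recommendations; the `shortlist` helper name is an invention for illustration, not part of any real tool:

```python
# Use case -> candidate GPU models, per the recommendations above.
RECOMMENDATIONS = {
    "professional-ai": ["RTX PRO 6000", "RTX PRO 6000 Max-Q"],
    "scientific-hpc": ["MI355X", "MI300X", "H100 SXM5", "H200 SXM"],
}

def shortlist(use_case: str) -> list:
    """Return candidate GPU models for a workload, or an empty
    list when the use case is not covered by the guide."""
    return RECOMMENDATIONS.get(use_case.lower(), [])

print(shortlist("Scientific-HPC"))
```

A real selector would also weigh the price ranges and TDP limits from the comparison table, but the table-driven lookup is the core idea.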
Need Help Selecting the Right GPU?
Our infrastructure architects analyze your workload and recommend optimal GPU configurations with detailed cost projections.