AI Infrastructure
Power Calculator
Calculate accurate power consumption, cooling loads, electrical infrastructure requirements, and operating costs for your AI GPU deployment. From single workstations to multi-rack data center configurations, plan your facility with confidence.
Executive Summary
Unprecedented Power Density
Modern AI GPUs consume 700W-1,100W each. An 8-GPU server can draw 10kW or more, creating facility challenges that traditional IT infrastructure never faced.
Significant Operating Costs
Electricity represents 30-40% of total AI infrastructure cost over 5 years. Accurate planning prevents budget overruns and identifies optimization opportunities.
Infrastructure Requirements
GPU deployments require proper electrical circuits, cooling capacity, and rack power distribution. Underestimating leads to costly retrofits or capacity constraints.
Plan Before You Deploy
Use this calculator to determine power, cooling, and electrical requirements before procurement. Proper planning ensures smooth deployment and optimal operations.
Understanding AI Infrastructure Power
GPU Thermal Design Power (TDP)
TDP represents the maximum sustained power a GPU is designed to draw under load. Modern AI accelerators have dramatically higher TDP than consumer GPUs:
- Consumer GPUs: 200W-575W (RTX 4090: 450W, RTX 5090: 575W)
- Workstation & edge GPUs: 72W-350W (L4: 72W, RTX 6000 Pro: 350W)
- Datacenter GPUs: 350W-1,100W (H100: 700W, B200: 1,000W, B300: 1,100W)
- Superchips: 2,700W+ per compute tray (GB200 NVL72)
Power Usage Effectiveness (PUE)
PUE measures total facility power divided by IT equipment power, accounting for cooling, lighting, power distribution losses, and other overhead. A PUE of 1.5 means the facility draws 1.5kW in total for every 1kW delivered to IT equipment; efficient modern data centers achieve 1.1-1.3, while average facilities run 1.5-1.8.
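As a minimal sketch of this relationship (the function name is illustrative; 5.6 kW is the GPU draw of the 8× H100 SXM server used throughout this guide):

```python
def facility_power_kw(it_load_kw: float, pue: float) -> float:
    """Total facility draw = IT equipment power x PUE."""
    return it_load_kw * pue

# 5.6 kW of H100 SXM GPUs at PUE 1.5:
total = facility_power_kw(5.6, 1.5)   # ~8.4 kW total
overhead = total - 5.6                # ~2.8 kW for cooling and distribution
print(f"{total:.1f} kW total, {overhead:.1f} kW overhead")
```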
Utilization Impact
GPUs rarely run at 100% TDP continuously. Actual power consumption varies by workload:
- Idle: 20-40% of TDP
- Inference (Light): 50-70% of TDP
- Inference (Heavy): 70-85% of TDP
- Training: 85-100% of TDP
Plan for 85% average utilization for production inference workloads. Training clusters may sustain 95%+ utilization.
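To turn these planning bands into expected wattage, a small sketch (the band midpoints and the 700W H100 SXM figure come from this guide; the helper itself is illustrative):

```python
TDP_WATTS = 700  # H100 SXM

# Midpoint of each utilization band above (planning values, not measurements).
UTILIZATION_BANDS = {
    "Idle": 0.30,
    "Inference (Light)": 0.60,
    "Inference (Heavy)": 0.775,
    "Training": 0.925,
}

for workload, fraction in UTILIZATION_BANDS.items():
    print(f"{workload:<18} ~{TDP_WATTS * fraction:.0f} W")
```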
Conversion Factors
Essential conversions for power planning:
- 1 kW = 3,412 BTU/hr
- 1 ton of cooling = 12,000 BTU/hr (about 3.5 kW)
- Watts = Volts × Amps (single-phase)
- Annual kWh = average kW × 8,760 hours
Power Requirements Calculator
Configure your GPU deployment and get instant power, cooling, and cost estimates
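The interactive widget is not reproduced in this text, but its outputs reduce to the formulas in this guide. The sketch below combines them; the class and field names are illustrative, and the defaults follow the assumptions used throughout (85% utilization, PUE 1.5, $0.10/kWh, 208V single-phase):

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    gpu_count: int
    tdp_watts: float            # per-GPU TDP
    utilization: float = 0.85   # production inference planning figure
    pue: float = 1.5            # facility overhead multiplier
    rate_per_kwh: float = 0.10  # electricity rate, $/kWh
    volts: float = 208.0        # distribution voltage (single-phase)

    @property
    def peak_kw(self) -> float:
        # Worst-case GPU draw; size cooling and circuits for this.
        return self.gpu_count * self.tdp_watts / 1000

    @property
    def avg_it_kw(self) -> float:
        # Average GPU draw; CPUs, memory, and networking add more.
        return self.peak_kw * self.utilization

    @property
    def facility_kw(self) -> float:
        return self.avg_it_kw * self.pue

    @property
    def annual_cost(self) -> float:
        return self.facility_kw * 8760 * self.rate_per_kwh  # 24/7 operation

    @property
    def btu_per_hr(self) -> float:
        return self.peak_kw * 1000 * 3.412   # all GPU power becomes heat

    @property
    def cooling_tons(self) -> float:
        return self.btu_per_hr / 12000

    @property
    def min_circuit_amps(self) -> float:
        # NEC 80% rule: continuous loads sized at 125% of peak amps.
        return self.peak_kw * 1000 / self.volts * 1.25

server = Deployment(gpu_count=8, tdp_watts=700)  # 8x H100 SXM
print(f"Average facility load: {server.facility_kw:.1f} kW")
print(f"Annual electricity:    ${server.annual_cost:,.0f}")
print(f"Cooling (at peak):     {server.btu_per_hr:,.0f} BTU/hr "
      f"({server.cooling_tons:.1f} tons)")
print(f"Circuit (GPUs only):   {server.min_circuit_amps:.0f} A at {server.volts:.0f} V")
```

For the 8× H100 SXM example this prints about 7.1 kW of average facility load, roughly $6,255/year in electricity, 19,107 BTU/hr of peak heat (1.6 tons), and a 34A minimum feed at 208V for the GPUs alone, matching the worked examples in the sections below.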
GPU Power Specifications
Complete power specifications for AI accelerators from NVIDIA, AMD, and Intel
| GPU Model | TDP (W) | Memory | Category | Annual Cost* |
|---|---|---|---|---|
| **NVIDIA Consumer** | | | | |
| RTX 4090 | 450W | 24GB GDDR6X | Consumer | $335/year |
| RTX 5090 | 575W | 32GB GDDR7 | Consumer | $428/year |
| RTX 5080 | 360W | 16GB GDDR7 | Consumer | $268/year |
| RTX 5070 Ti | 300W | 16GB GDDR7 | Consumer | $223/year |
| RTX 5070 | 250W | 12GB GDDR7 | Consumer | $186/year |
| **NVIDIA Workstation & Edge** | | | | |
| RTX 6000 Ada | 300W | 48GB GDDR6 | Workstation | $223/year |
| L4 | 72W | 24GB GDDR6 | Edge/Inference | $54/year |
| RTX 6000 Blackwell Pro | 350W | 96GB GDDR7 | Workstation/Datacenter | $260/year |
| **NVIDIA Datacenter (Ampere)** | | | | |
| A100 40GB | 400W | 40GB HBM2e | Datacenter | $298/year |
| A100 80GB | 400W | 80GB HBM2e | Datacenter | $298/year |
| **NVIDIA Datacenter (Hopper)** | | | | |
| H100 PCIe | 350W | 80GB HBM3 | Datacenter | $260/year |
| H100 SXM | 700W | 80GB HBM3 | Datacenter | $521/year |
| H200 SXM | 700W | 141GB HBM3e | Datacenter | $521/year |
| **NVIDIA Datacenter (Blackwell)** | | | | |
| B100 | 700W | 192GB HBM3e | Datacenter | $521/year |
| B200 | 1,000W | 192GB HBM3e | Datacenter | $744/year |
| B300 | 1,100W | 288GB HBM3e | Datacenter | $819/year |
| GB200 NVL72 | 2,700W/tray | Combined | Superchip | $2,010/year |
| **AMD Instinct** | | | | |
| MI300X | 750W | 192GB HBM3 | Datacenter | $558/year |
| MI325X | 750W | 256GB HBM3e | Datacenter | $558/year |
| MI355X | 900W | 288GB HBM3e | Datacenter | $670/year |
| **Intel** | | | | |
| Gaudi 3 | 900W | 128GB HBM2e | Datacenter | $670/year |
*Annual electricity cost per GPU at 85% utilization, $0.10/kWh, 24/7 operation, GPU power only (facility PUE overhead not included). Actual costs vary by configuration and facility efficiency.
Cooling Requirements
Heat Generation Basics
All electrical power consumed by GPUs converts to heat that must be removed. Use the formula: BTU/hr = Watts × 3.412. For an 8-GPU H100 SXM server:
- GPU Power: 8 × 700W = 5,600W
- Heat Output: 5,600 × 3.412 = 19,107 BTU/hr
- Cooling Tons: 19,107 ÷ 12,000 = 1.6 tons
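The same arithmetic as a quick sketch (the constants are the standard conversions listed above):

```python
gpu_watts = 8 * 700                    # 8x H100 SXM
btu_per_hr = gpu_watts * 3.412         # 1 W = 3.412 BTU/hr
tons = btu_per_hr / 12_000             # 1 ton of cooling = 12,000 BTU/hr
print(f"{btu_per_hr:,.0f} BTU/hr = {tons:.1f} tons")  # 19,107 BTU/hr = 1.6 tons
```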
Cooling Technology Options
Options range from conventional air handling to liquid systems as rack density climbs:
- Air cooling with hot/cold aisle containment: practical to roughly 25-30 kW per rack
- Rear-door heat exchangers (RDHx): extend air-cooled racks into the ~25 kW range
- Direct liquid cooling (direct-to-chip): required for dense 40 kW+ racks
- Immersion cooling: for the highest densities where air and direct liquid fall short
Rack Density Planning
Calculate cooling needs by rack configuration:
| Configuration | Power/Rack | BTU/hr | Cooling Method |
|---|---|---|---|
| 2× 8-GPU H100 servers | ~12 kW | ~41,000 | Air (Hot/Cold Aisle) |
| 4× 8-GPU H100 servers | ~24 kW | ~82,000 | Air + RDHx |
| 4× 8-GPU B200 servers | ~40 kW | ~136,000 | Direct Liquid Cooling |
| DGX GB200 NVL72 | ~120 kW | ~409,000 | Liquid Cooling Required |
Electrical Infrastructure Requirements
Voltage Requirements
AI GPU servers typically require higher voltage power distribution for efficiency:
- 208V 3-phase: Most common in North American data centers
- 240V single-phase: Common for smaller deployments
- 400V 3-phase: European standard, increasingly used in US for efficiency
Higher voltage = lower amperage = smaller conductors = lower infrastructure cost
NEC Compliance (80% Rule)
Per the National Electrical Code, continuous loads (running 3+ hours) must not exceed 80% of circuit rating. Equivalently, circuits must be sized at 125% of the load: a 30A circuit, for example, should carry no more than 24A continuously.
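A minimal circuit-sizing sketch under this rule (the breaker list holds common standard ratings; the 6 kW example is the H100 server from the table below):

```python
STANDARD_BREAKERS = [15, 20, 30, 40, 50, 60, 70, 80, 100]  # amps

def required_circuit(load_watts: float, volts: float = 208) -> int:
    """Smallest standard breaker for a continuous load (NEC 125% sizing)."""
    minimum_amps = load_watts / volts * 1.25
    return next(b for b in STANDARD_BREAKERS if b >= minimum_amps)

print(required_circuit(6000))  # 8x H100 SXM server: 36 A minimum -> 40 A breaker
```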
Common Server Power Requirements
| Server Type | Power Draw | Amps @ 208V | Circuit Needed |
|---|---|---|---|
| 4× RTX 6000 Pro Server | ~2.0 kW | ~10A | 15A circuit |
| 8× A100 Server | ~4.0 kW | ~19A | 30A circuit |
| 8× H100 SXM Server | ~6.0 kW | ~29A | 40A circuit |
| 8× B200 Server | ~10.0 kW | ~48A | 60A circuit |
| DGX H100 | ~10.2 kW | ~49A | 60A circuit |
PDU and Rack Power
Plan Power Distribution Units (PDUs) based on total rack load:
- Basic PDU: Power distribution only, no monitoring
- Metered PDU: Displays total power consumption
- Monitored PDU: Per-outlet monitoring, remote access
- Switched PDU: Remote power cycling capability
For AI deployments, use monitored or switched PDUs for visibility and management. Plan for N+1 redundancy on critical workloads.
Regional Electricity Rates
Electricity costs vary significantly by region, impacting total operating costs
| Region | Avg. Commercial Rate | Annual Cost (8× H100)* | Notes |
|---|---|---|---|
| Texas (ERCOT) | $0.065/kWh | $4,100 | Lowest US rates, renewable options |
| Virginia (Dominion) | $0.075/kWh | $4,700 | Major data center hub |
| Ohio/Indiana | $0.080/kWh | $5,000 | Growing DC market |
| US Average | $0.100/kWh | $6,300 | Baseline reference |
| California (PG&E) | $0.180/kWh | $11,300 | High rates, renewable mandates |
| New York (ConEd) | $0.200/kWh | $12,500 | Highest US metro rates |
| Iceland | $0.045/kWh | $2,800 | 100% renewable, cool climate |
| Norway | $0.050/kWh | $3,100 | Hydropower, free cooling |
| Germany | $0.250/kWh | $15,600 | Highest European rates |
*Annual cost for the GPUs in an 8× H100 SXM server (5.6 kW) at 85% utilization, PUE 1.5, 24/7 operation (about 62,500 kWh/year), rounded to the nearest $100. Rates are approximate commercial/industrial averages and vary by contract.
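The table's arithmetic, reproduced as a sketch (the rates are the approximate figures above; the script prints exact dollars where the table rounds to the nearest $100):

```python
# 5.6 kW of GPUs x 85% utilization x PUE 1.5 x 8,760 hours ~= 62,546 kWh
KWH_PER_YEAR = 5.6 * 0.85 * 1.5 * 8760

RATES = {  # $/kWh, approximate commercial averages
    "Texas (ERCOT)": 0.065,
    "US Average": 0.100,
    "California (PG&E)": 0.180,
    "Iceland": 0.045,
    "Germany": 0.250,
}

for region, rate in RATES.items():
    print(f"{region:>18}: ${KWH_PER_YEAR * rate:,.0f}/year")
```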
Power Efficiency Best Practices
Optimize PUE
Reducing PUE from 1.6 to 1.3 saves 18.75% on total power costs. Invest in efficient cooling, use free cooling when possible, and maintain hot/cold aisle containment.
Right-Size Deployments
Match GPU selection to workload requirements. Using RTX 6000 Pro for inference instead of H100 saves 50% power per GPU while often meeting latency requirements.
Use Power Capping
NVIDIA GPUs support power capping via nvidia-smi. Reducing H100 from 700W to 500W often provides 80% of performance at 70% of power consumption.
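For example, nvidia-smi's -pl flag sets the board power limit in watts (it requires administrator privileges, and valid limits can be checked with nvidia-smi -q -d POWER). A thin Python wrapper, as a sketch:

```python
import subprocess

def cap_gpu_power(limit_watts: int, gpu_index: int = 0) -> None:
    """Set a GPU's board power limit via nvidia-smi (needs root/admin)."""
    subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index), "-pl", str(limit_watts)],
        check=True,
    )

cap_gpu_power(500)  # e.g. cap an H100 SXM from 700 W to 500 W
```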
Schedule Workloads
Run batch training during off-peak hours when electricity rates are lower. Many utilities offer time-of-use rates with 30-50% savings at night.
Monitor Continuously
Use DCIM tools and GPU monitoring (nvidia-smi, AMD SMI) to track actual power consumption. Identify idle GPUs and optimize utilization.
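As a sketch using the official NVML Python bindings (pip install nvidia-ml-py), which nvidia-smi itself is built on:

```python
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        milliwatts = pynvml.nvmlDeviceGetPowerUsage(handle)  # current draw, mW
        print(f"GPU {i}: {milliwatts / 1000:.0f} W")
finally:
    pynvml.nvmlShutdown()
```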
Consider Location
Electricity rates vary more than 5× across the regions above. For large deployments, colocating in Texas or the Nordic countries can save millions over 5 years.
Frequently Asked Questions
How much power does an NVIDIA H100 GPU consume?
The NVIDIA H100 SXM consumes 700W TDP (Thermal Design Power), while the H100 PCIe variant uses 350W. Actual power consumption varies with workload, typically 50-100% of TDP during AI inference and training (and 20-40% at idle). An 8-GPU H100 SXM server can draw approximately 5.6kW from GPUs alone, plus additional power for CPUs, memory, storage, and networking.
What is PUE and why does it matter for AI infrastructure?
PUE (Power Usage Effectiveness) measures total facility power divided by IT equipment power. A PUE of 1.5 means for every 1kW of GPU power, 0.5kW goes to cooling and infrastructure. Modern efficient data centers achieve PUE of 1.1-1.3, while average facilities run 1.5-1.8. Lower PUE directly reduces operating costs - improving from 1.6 to 1.3 saves nearly 19% on electricity.
How do I calculate cooling requirements for GPUs?
Convert GPU power to BTU/hr by multiplying watts by 3.412. For cooling tons, divide BTU/hr by 12,000. An 8-GPU H100 server (5.6kW) produces approximately 19,100 BTU/hr, requiring about 1.6 tons of cooling capacity. For high-density deployments (40kW+ per rack), direct liquid cooling or immersion cooling becomes necessary as air cooling reaches its practical limits around 25-30kW per rack.
What electrical infrastructure do I need for AI GPUs?
AI GPU servers typically require 208V three-phase or 240V single-phase power. Per the National Electrical Code, circuits should be sized at 125% of continuous load (the 80% rule). An 8-GPU H100 server drawing 5.6kW needs approximately 27 amps at 208V or 23 amps at 240V, requiring a 35A or 30A circuit respectively. Plan for redundant power feeds (A+B) and monitored PDUs for production deployments.
How much does it cost to run AI GPUs annually?
Annual electricity costs depend on GPU power, utilization, PUE, and local rates. A single H100 SXM at 85% utilization with PUE 1.5 and $0.10/kWh costs approximately $780/year. The GPUs in an 8-GPU server together cost around $6,300/year in electricity. Over 5 years, electricity typically represents 30-40% of total cost of ownership for AI infrastructure.
What percentage of AI infrastructure TCO is electricity?
Electricity typically represents 30-40% of total cost of ownership (TCO) over a 5-year period for AI infrastructure. For high-density GPU deployments running 24/7, this can exceed 50% of TCO, making power efficiency and electricity rates critical factors in deployment decisions. Location selection (Texas vs. California can differ 3× in power costs) significantly impacts long-term economics.
Need Help Planning Your AI Infrastructure?
From hardware procurement to facility planning, SLYD provides end-to-end support for your AI deployment
GPU Hardware
Source H100, H200, B200, and MI300X systems from tier-1 OEMs including Dell, HPE, Supermicro, and Lenovo.
Explore Hardware →
Equipment Financing
Flexible financing options to spread GPU infrastructure costs over 24-60 months with competitive rates.
Learn About Financing →
Cloud Compute
On-demand GPU compute from our marketplace of verified providers. No hardware procurement delays.
Browse Compute →
Consulting Services
Expert guidance on infrastructure design, facility planning, and deployment optimization.
Contact Us →