Take an order-of-magnitude leap in accelerated computing.
The NVIDIA H100 Tensor Core GPU delivers unprecedented performance, scalability, and security for every workload. With NVIDIA® NVLink® Switch System, up to 256 H100 GPUs can be connected to accelerate exascale workloads, while the dedicated Transformer Engine supports trillion-parameter language models. H100 uses breakthrough innovations in the NVIDIA Hopper™ architecture to deliver industry-leading conversational AI, speeding up large language models by 30X over the previous generation.
See product specifications.
| Form Factor | H100 SXM | H100 PCIe |
| --- | --- | --- |
| FP64 | 34 teraFLOPS | 26 teraFLOPS |
| FP64 Tensor Core | 67 teraFLOPS | 51 teraFLOPS |
| FP32 | 67 teraFLOPS | 51 teraFLOPS |
| TF32 Tensor Core | 989 teraFLOPS* | 756 teraFLOPS* |
| BFLOAT16 Tensor Core | 1,979 teraFLOPS* | 1,513 teraFLOPS* |
| FP16 Tensor Core | 1,979 teraFLOPS* | 1,513 teraFLOPS* |
| FP8 Tensor Core | 3,958 teraFLOPS* | 3,026 teraFLOPS* |
| INT8 Tensor Core | 3,958 TOPS* | 3,026 TOPS* |
| GPU memory | 80GB | 80GB |
| GPU memory bandwidth | 3.35TB/s | 2TB/s |
| Decoders | 7 NVDEC, 7 JPEG | 7 NVDEC, 7 JPEG |
| Max thermal design power (TDP) | Up to 700W (configurable) | 300–350W (configurable) |
| Multi-Instance GPUs | Up to 7 MIGs @ 10GB each | Up to 7 MIGs @ 10GB each |
| Form factor | SXM | PCIe, dual-slot, air-cooled |
| Interconnect | NVLink: 900GB/s; PCIe Gen5: 128GB/s | NVLink: 600GB/s; PCIe Gen5: 128GB/s |
| Server options | NVIDIA HGX™ H100 Partner and NVIDIA-Certified Systems™ with 4 or 8 GPUs; NVIDIA DGX™ H100 with 8 GPUs | Partner and NVIDIA-Certified Systems with 1–8 GPUs |
| NVIDIA AI Enterprise | Add-on | Included |

\* With sparsity.
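To make the SXM-versus-PCIe comparison concrete, the peak-throughput rows above can be captured as plain data and the ratio between the two form factors computed. This is a minimal illustrative sketch, not an official NVIDIA tool; the `SPECS` dict and `sxm_speedup` helper are names introduced here, and the figures are copied directly from the table (starred values are peak throughput with sparsity).

```python
# Illustrative only: datasheet rows above as Python data,
# so the two H100 form factors can be compared numerically.
# Starred figures are peak throughput with sparsity.

SPECS = {  # metric: (H100 SXM, H100 PCIe)
    "FP64 teraFLOPS": (34, 26),
    "FP64 Tensor Core teraFLOPS": (67, 51),
    "FP32 teraFLOPS": (67, 51),
    "TF32 Tensor Core teraFLOPS": (989, 756),
    "FP16 Tensor Core teraFLOPS": (1979, 1513),
    "FP8 Tensor Core teraFLOPS": (3958, 3026),
    "Memory bandwidth TB/s": (3.35, 2.0),
}

def sxm_speedup(metric: str) -> float:
    """Return SXM peak throughput relative to PCIe for one metric."""
    sxm, pcie = SPECS[metric]
    return sxm / pcie

for metric, (sxm, pcie) in SPECS.items():
    print(f"{metric}: SXM {sxm} vs PCIe {pcie} ({sxm_speedup(metric):.2f}x)")
```

Across the compute rows the SXM part is roughly 1.3x the PCIe part, while memory bandwidth shows the largest gap (3.35TB/s vs 2TB/s, about 1.68x).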