| Specifications and models | NVIDIA A100 80GB PCIe |
| --- | --- |
| FP64 | 9.7 TFLOPS |
| FP64 Tensor Core | 19.5 TFLOPS |
| FP32 | 19.5 TFLOPS |
| Tensor Float 32 (TF32) | 156 TFLOPS \| 312 TFLOPS* |
| BFLOAT16 Tensor Core | 312 TFLOPS \| 624 TFLOPS* |
| FP16 Tensor Core | 312 TFLOPS \| 624 TFLOPS* |
| INT8 Tensor Core | 624 TOPS \| 1,248 TOPS* |
| GPU memory | 80 GB HBM2e |
| GPU memory bandwidth | 1,935 GB/s |
| Maximum Thermal Design Power (TDP) | 300 W |
| Multi-Instance GPU (MIG) | Up to 7 MIGs @ 10 GB each |
| Form factor | PCIe, dual-slot air-cooled or single-slot liquid-cooled |
| Interconnect | NVIDIA® NVLink® Bridge for 2 GPUs: 600 GB/s**; PCIe Gen4: 64 GB/s |
| Server options | Partner and NVIDIA-Certified Systems™ with 1 to 8 GPUs |
| Warranty service | One-year warranty |

\* With sparsity.
\*\* Via NVLink Bridge, which connects up to two A100 GPUs.
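
As a quick sanity check of the memory figures above on an installed card, the minimal CUDA runtime sketch below reads back the device properties and derives the usual theoretical peak memory bandwidth from memory clock and bus width. It assumes a single visible GPU at device index 0 and is illustrative only, not an official query tool.

```cpp
// Minimal device-property readout using the CUDA runtime API.
// Reports name, total memory, SM count, and an estimated peak bandwidth.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaError_t err = cudaGetDeviceProperties(&prop, 0);  // device 0 assumed
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaGetDeviceProperties failed: %s\n",
                     cudaGetErrorString(err));
        return 1;
    }

    double mem_gib = prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0);
    // Theoretical peak bandwidth: 2 (double data rate) x memory clock (kHz -> Hz)
    // x bus width in bytes, converted to GB/s.
    double peak_gbps = 2.0 * prop.memoryClockRate * 1e3 *
                       (prop.memoryBusWidth / 8.0) / 1e9;

    std::printf("Device        : %s\n", prop.name);
    std::printf("Global memory : %.1f GiB\n", mem_gib);
    std::printf("Peak bandwidth: ~%.0f GB/s (theoretical)\n", peak_gbps);
    std::printf("SM count      : %d\n", prop.multiProcessorCount);
    return 0;
}
```

Build with `nvcc device_query.cu -o device_query`; on an A100 80GB PCIe the reported memory and estimated bandwidth should line up with the 80 GB and roughly 1,935 GB/s quoted in the table.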