Best AI Models for NVIDIA H100 SXM (80.0GB)

VRAM: 80.0 GB HBM3 · Bandwidth: 3352.0 GB/s · CUDA Cores: 16,896 · TDP: 700W

With 80 GB of memory, this is a high-end configuration for local AI. You can comfortably run most open-source LLMs including large 70B parameter models at good quantization levels, making it one of the best setups for serious local AI work.

At this memory tier, nearly every popular open-source model is within reach. You can run Llama 3 70B at Q4_K_M or even Q5_K_M quantization with room to spare, handle coding assistants like DeepSeek Coder 33B at high quality, and easily run any 7B–30B model at full or near-full precision. Context windows remain generous even with larger models, so multi-turn conversations and long-document processing work smoothly.
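A quick way to sanity-check whether a given model and quantization fits is to estimate weight size from the parameter count. The Python sketch below is a back-of-the-envelope calculation, not a measurement: the bytes-per-parameter figures are approximate averages for llama.cpp-style quants, and the 1.2x overhead factor for KV cache and runtime state is an assumption.

# Rough VRAM-fit estimator. All constants are ballpark assumptions.
BYTES_PER_PARAM = {
    "Q4_K_M": 0.60,  # ~4.8 bits per weight
    "Q5_K_M": 0.70,
    "Q6_K": 0.82,
    "Q8_0": 1.06,
    "FP16": 2.00,
}

def fits(params_b: float, quant: str, vram_gb: float = 80.0,
         overhead: float = 1.2) -> bool:
    # `overhead` is a fudge factor covering KV cache, activations,
    # and CUDA context at moderate context lengths.
    weights_gb = params_b * BYTES_PER_PARAM[quant]
    return weights_gb * overhead <= vram_gb

print(fits(70, "Q4_K_M"))   # True: ~50 GB needed, fits in 80 GB
print(fits(141, "Q5_K_M"))  # False: Mixtral 8x22B-scale, ~118 GB needed

By this estimate, a 70B model at Q5_K_M needs about 59 GB, which is why 70B models leave room to spare on this card while 120B+ models do not.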

Runs Well

  • 70B models (Llama 3 70B, Qwen 72B) at Q4–Q5
  • 30B models at Q6–Q8 quality
  • 7B–14B models at full FP16 precision
  • Vision models (LLaVA, CogVLM) without compromise

Challenging

  • Mixture-of-experts models like Mixtral 8x22B at higher quants
  • 120B+ models still require lower quantizations

What LLMs Can NVIDIA H100 SXM Run?

LLM models compatible with NVIDIA H100 SXM — ranked by performance
Model | Quant | Speed | Context | VRAM | Grade
— | Q4_K_M | 412.7 tok/s | 131K | 5.3 GB | D (29)
Phi 4 (14B) | Q4_K_M | 238.9 tok/s | 16K | 9.1 GB | C (31)
— | Q4_K_M | 275.1 tok/s | 33K | 7.9 GB | C (30)
Qwen3 4B (4B) | Q4_K_M | 753.9 tok/s | 41K | 2.9 GB | D (27)
— | Q4_K_M | 442.8 tok/s | 33K | 4.9 GB | D (28)
— | Q4_K_M | 405.7 tok/s | 131K | 5.4 GB | D (29)
— | Q4_K_M | 1100.4 tok/s | 131K | 2.0 GB | D (26)
— | Q4_K_M | 3301.2 tok/s | 131K | 0.7 GB | D (26)
Hermes 3 Llama 3.1 8B (8.0B) | Q4_K_M | 404.2 tok/s | 131K | 5.4 GB | D (29)
Phi 3 Mini 4k Instruct (3.8B) | Q8_0 | 443.7 tok/s | 4K | 4.9 GB | D (28)
Phi 2 (2.8B) | Q4_K_M | 825.3 tok/s | 2K | 2.6 GB | D (27)
— | Q4_K_M | 3301.2 tok/s | 33K | 0.7 GB | D (26)
— | Q4_K_M | 436.6 tok/s | 131K | 5.0 GB | D (28)
— | Q4_K_M | 2157.2 tok/s | 2K | 1.0 GB | D (26)
— | Q4_K_M | 357.2 tok/s | 8K | 6.1 GB | D (29)
Phi 4 Mini Instruct (3.8B) | Q4_K_M | 764.5 tok/s | 131K | 2.9 GB | D (27)

All entries above carry the EASY RUN tag on this GPU.

NVIDIA H100 SXM Specifications

Brand: NVIDIA
Architecture: Hopper
VRAM: 80.0 GB HBM3
Memory Bandwidth: 3352.0 GB/s
CUDA Cores: 16,896
Tensor Cores: 528
FP16 Performance: 989.40 TFLOPS
TDP: 700W
Release Date: 2022-09-01

Get Started

Ollama (Recommended)

$ curl -fsSL https://ollama.com/install.sh | sh
$ ollama run llama3:8b
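If you prefer scripting over the CLI, the same local server can be driven from Python. A minimal sketch using the official ollama client (pip install ollama), assuming the Ollama server is running and the model tag has already been pulled:

# Minimal chat call via the official `ollama` Python client.
# Assumes `ollama pull llama3:8b` (or another tag) has been run.
import ollama

response = ollama.chat(
    model="llama3:8b",
    messages=[{"role": "user", "content": "Summarize KV caching in two sentences."}],
)
print(response["message"]["content"])

Swap the model tag for a larger one once it is pulled; on this card even 70B tags fit comfortably.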

LM Studio

Download LM Studio, search for a model, and run it with one click.

GPUs to Consider Over NVIDIA H100 SXM

Similar GPUs and upgrades with more VRAM or higher bandwidth for AI

Frequently Asked Questions

Can NVIDIA H100 SXM run Llama 3.1 70B Instruct?

Yes, the NVIDIA H100 SXM with 80 GB can run Llama 3.1 70B Instruct, Llama 3.3 70B Instruct, Qwen2.5 72B Instruct, and 1278 other models. 11 models run at excellent quality, and 76 at good quality. Check the compatibility table above for the full list with VRAM usage and estimated speed.

Is NVIDIA H100 SXM good for AI?

The NVIDIA H100 SXM has 80 GB of HBM3, making it excellent for running local AI models. It supports 87 models at good quality or better. With 3352.0 GB/s of memory bandwidth, it delivers very fast token generation. This is a data-center GPU that handles virtually every popular open-source LLM.

How many parameters can NVIDIA H100 SXM handle?

With 80 GB, the NVIDIA H100 SXM supports models from 3B to 70B+ parameters depending on quantization level. At Q4_K_M (the recommended sweet spot, roughly 0.6 bytes per parameter), 80 GB holds roughly 133B parameters' worth of weights: 80 GB ÷ ~0.6 bytes/param ≈ 133B. In practice that means 70B models at Q4–Q5 with room to spare, and 7B–30B models at high quality (Q6/Q8) or full precision.

What quantization should I use on NVIDIA H100 SXM?

For the best balance of quality and speed on the NVIDIA H100 SXM, start with Q4_K_M, which preserves roughly 85% of the original model quality while keeping VRAM usage reasonable. With 80 GB, you have the headroom to run 7B–30B models at Q6_K, Q8_0, or even FP16 for noticeably better output quality, and 70B models at Q5_K_M. For the largest models (120B+), Q4_K_M remains the sweet spot.
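To make the trade-off concrete, the sketch below prints estimated weight sizes per quantization level for several model sizes, using the same rough bytes-per-parameter assumptions as the fit estimator above; these are estimates, not measurements.

# Estimated weight size in GB per quantization level (ballpark figures).
BYTES_PER_PARAM = {"Q4_K_M": 0.60, "Q5_K_M": 0.70, "Q6_K": 0.82,
                   "Q8_0": 1.06, "FP16": 2.00}

for params_b in (7, 14, 33, 70):
    row = "  ".join(f"{q}={params_b * bpp:.1f}GB"
                    for q, bpp in BYTES_PER_PARAM.items())
    print(f"{params_b:>3}B  {row}")
# At 70B: Q6_K is ~57.4 GB and fits in 80 GB; FP16 is ~140 GB and does not.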

How fast is NVIDIA H100 SXM for AI inference?

With 3352.0 GB/s memory bandwidth, the NVIDIA H100 SXM achieves approximately 484 tokens/sec on a 7B model at Q4_K_M — that's very fast, well above conversational speed. A 14B model runs at ~242 tok/s. Token generation speed scales inversely with model size — smaller models are significantly faster.

tok/s ≈ (3352 GB/s ÷ model size in GB) × efficiency

Smaller models mean faster inference: memory bandwidth is the main bottleneck for token generation speed. Real-world results typically land within ±20% of the estimate, depending on the quantization kernel, batch size, and software stack.
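In code, the estimate reads as below; the efficiency factor is an assumed constant chosen here to reproduce the figures quoted above, and real kernels vary.

# Bandwidth-bound decode estimate: each generated token reads the full
# weight set once, so tok/s ≈ bandwidth ÷ model bytes × efficiency.
BANDWIDTH_GBS = 3352.0
EFFICIENCY = 0.606             # assumed constant, tuned to the quoted numbers
Q4_K_M_BYTES_PER_PARAM = 0.60  # ballpark bytes per parameter at Q4_K_M

def est_tok_per_s(params_b: float) -> float:
    model_gb = params_b * Q4_K_M_BYTES_PER_PARAM
    return BANDWIDTH_GBS / model_gb * EFFICIENCY

print(round(est_tok_per_s(7)))   # ~484, matching the 7B figure above
print(round(est_tok_per_s(14)))  # ~242, matching the 14B figure above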

What's the best model for NVIDIA H100 SXM?

The top-rated models for the NVIDIA H100 SXM are Llama 3.1 70B Instruct, Llama 3.3 70B Instruct, and Qwen2.5 72B Instruct. The best choice depends on your use case: coding assistants benefit from code-tuned models, while general chat works well with instruction-tuned models like Llama or Qwen.