NVIDIA · Hopper

Best AI Models for NVIDIA H100 PCIe (80.0 GB)

VRAM: 80.0 GB HBM2e · Bandwidth: 2039 GB/s · CUDA Cores: 14,592 · TDP: 350W

With 80 GB of memory, this is a high-end configuration for local AI. You can comfortably run most open-source LLMs including large 70B parameter models at good quantization levels, making it one of the best setups for serious local AI work.

At this memory tier, nearly every popular open-source model is within reach. You can run Llama 3 70B at Q4_K_M or even Q5_K_M quantization with room to spare, handle coding assistants like DeepSeek Coder 33B at high quality, and easily run any 7B–30B model at full or near-full precision. Context windows remain generous even with larger models, so multi-turn conversations and long-document processing work smoothly.
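
Before downloading anything, you can sanity-check whether a given model and quantization will fit. Below is a minimal back-of-the-envelope estimator in Python; the bits-per-weight and KV-cache constants are rough assumptions for illustration, not exact figures for any particular GGUF file.

```python
# Rough VRAM estimate: quantized weights + KV cache + runtime overhead.
# All constants here are approximations, not exact per-model figures.

BITS_PER_WEIGHT = {
    "Q4_K_M": 4.8,   # approximate effective bits/weight for llama.cpp quants
    "Q5_K_M": 5.5,
    "Q6_K": 6.6,
    "Q8_0": 8.5,
    "FP16": 16.0,
}

def estimate_vram_gb(params_billion: float, quant: str,
                     ctx_tokens: int = 8192,
                     kv_bytes_per_token: int = 400_000) -> float:
    """Estimate total VRAM in GB. kv_bytes_per_token is a rough value
    for a large GQA model; smaller architectures need less."""
    weights_gb = params_billion * BITS_PER_WEIGHT[quant] / 8
    kv_gb = ctx_tokens * kv_bytes_per_token / 1e9
    return 1.1 * (weights_gb + kv_gb)  # ~10% overhead for buffers/runtime

for params, quant in [(70, "Q4_K_M"), (32, "Q6_K"), (8, "FP16")]:
    print(f"{params}B @ {quant}: ~{estimate_vram_gb(params, quant):.0f} GB")
```

All three examples land well under 80 GB, consistent with the "Runs Well" list below.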

Runs Well

  • 70B models (Llama 3 70B, Qwen 72B) at Q4–Q5
  • 30B models at Q6–Q8 quality
  • 7B–14B models at full FP16 precision
  • Vision models (LLaVA, CogVLM) without compromise

Challenging

  • Mixture-of-experts models like Mixtral 8x22B at higher quants
  • 120B+ models still require lower quantizations

What LLMs Can NVIDIA H100 PCIe Run?

33 models · 3 good


LLM models compatible with NVIDIA H100 PCIe, ranked by performance:

| Model | Quant | Speed | Context | Fit | VRAM | Grade |
|---|---|---|---|---|---|---|
| Llama 3.1 70B Instruct | Q4_K_M | 28.5 tok/s | 131K | GOOD FIT | 46.6 GB | A (74) |
| Llama 3.3 70B Instruct | Q4_K_M | 28.7 tok/s | 131K | GOOD FIT | 46.2 GB | A (74) |
| Qwen2.5 72B Instruct | Q4_K_M | 29.7 tok/s | 33K | GOOD FIT | 44.6 GB | A (71) |
| — | Q4_K_M | 46.4 tok/s | 33K | FAIR FIT | 28.6 GB | B (51) |
| Qwen3 32B (32B) | Q4_K_M | 66.8 tok/s | 41K | EASY RUN | 19.8 GB | C (40) |
| — | Q4_K_M | 64.7 tok/s | 131K | EASY RUN | 20.5 GB | C (41) |
| — | Q4_K_M | 64.7 tok/s | 33K | EASY RUN | 20.5 GB | C (41) |
| Gemma 3 27B IT (27.4B) | Q4_K_M | 73.2 tok/s | 131K | EASY RUN | 18.1 GB | C (38) |
| GPT OSS 20B (21.5B) | Q4_K_M | 99.8 tok/s | 131K | EASY RUN | 13.3 GB | C (34) |
| — | Q4_K_M | 73.8 tok/s | 8K | EASY RUN | 18.0 GB | C (37) |
| QwQ 32B (32B) | Q4_K_M | 66.1 tok/s | 41K | EASY RUN | 20.0 GB | C (40) |
| — | Q4_K_M | 61.8 tok/s | 4K | EASY RUN | 21.4 GB | C (42) |
| GPT OSS 120B (120.4B) | Q4_K_M | 18.2 tok/s | 131K | FAIR FIT | 72.7 GB | B (48) |
| — | Q4_K_M | 265.6 tok/s | 33K | EASY RUN | 5.0 GB | D (28) |
| — | Q4_K_M | 87.7 tok/s | 33K | EASY RUN | 15.1 GB | C (35) |
| Qwen3 8B (8.2B) | Q4_K_M | 240.1 tok/s | 41K | EASY RUN | 5.5 GB | D (29) |

NVIDIA H100 PCIe Specifications

  • Brand: NVIDIA
  • Architecture: Hopper
  • VRAM: 80.0 GB HBM2e
  • Memory Bandwidth: 2039.0 GB/s
  • CUDA Cores: 14,592
  • Tensor Cores: 456
  • FP16 Performance: 756.50 TFLOPS
  • TDP: 350W
  • Release Date: 2022-09-01

Get Started

Ollama (Recommended)

$ curl -fsSL https://ollama.com/install.sh | sh
$ ollama run llama3:8b
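
Ollama also serves a local HTTP API (port 11434 by default) that you can script against. Here is a minimal sketch using only the Python standard library; the model tag and prompt are just examples, and assume you have already pulled the model:

```python
# Query a locally running Ollama server (default port 11434).
import json
import urllib.request

payload = {
    "model": "llama3:8b",   # any model you've already pulled
    "prompt": "Explain KV caching in one paragraph.",
    "stream": False,        # return a single JSON object, not a stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["response"])
# eval_count / eval_duration (nanoseconds) give measured generation speed
print(f'~{body["eval_count"] / (body["eval_duration"] / 1e9):.1f} tok/s')
```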

LM Studio

Download LM Studio, search for a model, and run it with one click.

GPUs to Consider Over NVIDIA H100 PCIe

Similar GPUs and upgrades with more VRAM or higher bandwidth for AI

Frequently Asked Questions

Can NVIDIA H100 PCIe run Llama 3.1 70B Instruct?

Yes, the NVIDIA H100 PCIe with 80 GB can run Llama 3.1 70B Instruct, Llama 3.3 70B Instruct, Qwen2.5 72B Instruct, and 1278 other models. 11 models run at excellent quality, and 76 at good quality. Check the compatibility table above for the full list with VRAM usage and estimated speed.

Is NVIDIA H100 PCIe good for AI?

The NVIDIA H100 PCIe has 80 GB of HBM2e, making it excellent for running local AI models. It supports 87 models at good quality or better. With 2039.0 GB/s memory bandwidth, it delivers fast token generation speeds. This is a data-center-grade GPU that handles virtually all popular open-source LLMs.

How many parameters can NVIDIA H100 PCIe handle?

With 80 GB, the NVIDIA H100 PCIe supports models from 3B to 70B+ parameters depending on quantization level. At Q4_K_M (the recommended sweet spot), you can fit roughly 133B parameters' worth of weights. In practice, that means 70B models at Q4 or Q5 with generous context, 30B models at Q6 to Q8, and 7B to 14B models at full FP16.
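
That 133B figure is straight arithmetic: VRAM divided by bytes per parameter. A quick check in Python, using rough per-quant byte costs (weights only, ignoring the KV cache):

```python
# Max parameter count that fits in 80 GB at common quantization levels.
# Bytes-per-parameter values are approximations (weights only).
VRAM_GB = 80
BYTES_PER_PARAM = {"Q4_K_M": 0.60, "Q6_K": 0.83, "Q8_0": 1.06, "FP16": 2.0}

for quant, bpp in BYTES_PER_PARAM.items():
    print(f"{quant}: ~{VRAM_GB / bpp:.0f}B parameters max")
# Q4_K_M prints ~133B; practical limits are lower once the KV cache
# and runtime overhead claim their share.
```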

What quantization should I use on NVIDIA H100 PCIe?

For the best balance of quality and speed on the NVIDIA H100 PCIe, start with Q4_K_M: it preserves roughly 85% of the original model quality while keeping VRAM usage reasonable. With 80 GB, you have the headroom to run 70B models at Q4_K_M or even Q5_K_M, and 30B models at Q6_K or Q8_0 for noticeably better output quality. For the largest models that fit, Q4_K_M remains the sweet spot; a size comparison follows below.
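
To make the tradeoff concrete, here is the same arithmetic applied to a 70B model across quantization levels. The bits-per-weight values are approximate llama.cpp figures, not exact sizes for any specific file:

```python
# Weight size for a 70B model at common quants, and what remains of 80 GB.
QUANTS = {"Q4_K_M": 4.8, "Q5_K_M": 5.5, "Q6_K": 6.6, "Q8_0": 8.5}

for name, bits_per_weight in QUANTS.items():
    weights_gb = 70 * bits_per_weight / 8
    print(f"{name}: ~{weights_gb:.0f} GB weights, "
          f"~{80 - weights_gb:.0f} GB left for KV cache and overhead")
```

This is why 70B at Q4 or Q5 is comfortable on this card, while Q8_0 (~74 GB of weights) leaves almost no room for context.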

How fast is NVIDIA H100 PCIe for AI inference?

With 2039.0 GB/s memory bandwidth, the NVIDIA H100 PCIe achieves approximately 295 tokens/sec on a 7B model at Q4_K_M — that's very fast, well above conversational speed. A 14B model runs at ~147 tok/s. Token generation speed scales inversely with model size — smaller models are significantly faster.

tok/s = (2039 GB/s ÷ model GB) × efficiency

Smaller models = faster inference. Memory bandwidth is the main bottleneck for token generation speed.
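
The same formula in code. The efficiency factor is an assumed constant (real kernels typically reach 50 to 80% of peak bandwidth), and the model sizes are approximate Q4_K_M weights:

```python
# Bandwidth-bound speed estimate: each generated token streams roughly
# the whole quantized model through memory once.
BANDWIDTH_GB_S = 2039   # H100 PCIe memory bandwidth
EFFICIENCY = 0.65       # assumed fraction of peak bandwidth achieved

def tokens_per_second(model_gb: float) -> float:
    return BANDWIDTH_GB_S / model_gb * EFFICIENCY

for label, size_gb in [("7B", 4.4), ("14B", 8.8), ("70B", 42.0)]:
    print(f"{label} @ Q4_K_M (~{size_gb} GB): "
          f"~{tokens_per_second(size_gb):.0f} tok/s")
```

The estimates (~300, ~150, and ~32 tok/s) land within the ±20% band of the figures quoted above and in the compatibility table.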

Estimated speed on NVIDIA H100 PCIe

Real-world results typically within ±20%. Speed depends on quantization kernel, batch size, and software stack.

Learn more about tok/s estimation →

What's the best model for NVIDIA H100 PCIe?

The top-rated models for the NVIDIA H100 PCIe are Llama 3.1 70B Instruct, Llama 3.3 70B Instruct, and Qwen2.5 72B Instruct. The best choice depends on your use case: coding assistants benefit from code-tuned models, while general chat works well with instruction-tuned models like Llama or Qwen.