Best AI Models for NVIDIA A100 40GB PCIe (40.0GB)
40 GB positions this hardware in the professional tier for local AI. Most popular open-source models run comfortably, and even large 70B parameter models are accessible at lower quantization levels.
This memory amount is a sweet spot for enthusiasts and professionals. You can run 13B–30B models like DeepSeek R1 Distill at Q5 or Q6 quality with smooth token generation, and 7B models at near-lossless precision. The 70B class of models (Llama 3 70B, Qwen 72B) becomes possible at Q2–Q3 quantization, though with some quality trade-off. For day-to-day use with coding assistants, chat models, and reasoning tasks, this tier delivers an excellent experience.
Runs Well
- 7B–13B models at Q6–Q8 quality
- 14B–30B models at Q4–Q5 quality
- Small models (3B–7B) at FP16 precision
- Vision-language models at good quality
Challenging
- 70B models only at Q2–Q3 (noticeable quality loss)
- Large context windows with 30B+ models
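As a quick way to check which tier a given model lands in, here is a minimal sketch that estimates GGUF-style VRAM use from parameter count and quantization level. The bits-per-weight figures and the 10% overhead factor are rough assumptions, not measured values.

```python
# Rough VRAM estimate for a quantized model.
# Bits-per-weight values are approximations for common GGUF quant types.
BITS_PER_WEIGHT = {
    "Q2_K": 2.6, "Q3_K_M": 3.9, "Q4_K_M": 4.8,
    "Q5_K_M": 5.7, "Q6_K": 6.6, "Q8_0": 8.5, "FP16": 16.0,
}

def estimate_vram_gb(params_b: float, quant: str, overhead: float = 1.1) -> float:
    """Weights-only size in GB, padded ~10% for KV cache and runtime buffers."""
    weight_gb = params_b * BITS_PER_WEIGHT[quant] / 8  # billions of params * bits / 8 = GB
    return weight_gb * overhead

if __name__ == "__main__":
    for params, quant in [(7, "Q8_0"), (32, "Q4_K_M"), (70, "Q3_K_M"), (70, "Q4_K_M")]:
        need = estimate_vram_gb(params, quant)
        verdict = "fits" if need <= 40.0 else "does not fit"
        print(f"{params}B @ {quant}: ~{need:.1f} GB -> {verdict} in 40 GB")
```

Under these assumptions a 70B model fits at Q3 (~37 GB) but not at Q4_K_M (~46 GB), matching the tiers above.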
What LLMs Can NVIDIA A100 40GB PCIe Run?
29 models · 1 excellent · 5 good
Showing compatibility for NVIDIA A100 40GB PCIe
| Model | Quant | VRAM | Speed | Context | Status | Grade |
|---|---|---|---|---|---|---|
| | Q4_K_M | 5.4 GB | 188.2 t/s | 131K | EASY RUN | C32 |
| | Q4_K_M | 2.9 GB | 349.7 t/s | 41K | EASY RUN | D29 |
| | Q4_K_M | 5.4 GB | 187.5 t/s | 131K | EASY RUN | C32 |
| | Q8_0 | 4.9 GB | 205.9 t/s | 4K | EASY RUN | C31 |
| | Q4_K_M | 6.1 GB | 165.7 t/s | 8K | EASY RUN | C33 |
| | Q4_K_M | 5.0 GB | 202.6 t/s | 131K | EASY RUN | C31 |
| | Q4_K_M | 2.0 GB | 510.5 t/s | 131K | EASY RUN | D28 |
| | Q4_K_M | 2.6 GB | 382.9 t/s | 2K | EASY RUN | D29 |
| | Q4_K_M | 1.0 GB | 1000.7 t/s | 2K | EASY RUN | D27 |
| | Q4_K_M | 0.7 GB | 1531.4 t/s | 131K | EASY RUN | D26 |
| | Q4_K_M | 0.7 GB | 1531.4 t/s | 33K | EASY RUN | D26 |
| | Q4_K_M | 2.9 GB | 354.6 t/s | 131K | EASY RUN | D29 |
| | Q4_K_M | 1.3 GB | 765.7 t/s | 8K | EASY RUN | D27 |
NVIDIA A100 40GB PCIe Specifications
- Brand: NVIDIA
- Architecture: Ampere
- VRAM: 40.0 GB HBM2e
- Memory Bandwidth: 1555.0 GB/s
- CUDA Cores: 6,912
- Tensor Cores: 432
- FP16 Performance: 312.00 TFLOPS (Tensor Core)
- TDP: 250W
- Release Date: 2020-05-14
Get Started
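One way in, sketched with the llama-cpp-python bindings. The model path is a placeholder, and any CUDA-capable runtime (Ollama, vLLM, plain llama.cpp) follows the same pattern.

```python
# Minimal llama-cpp-python example (assumes a build with CUDA support
# and a locally downloaded GGUF file; the path below is a placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/qwen3-32b-q4_k_m.gguf",  # hypothetical local file
    n_gpu_layers=-1,   # offload all layers; a 32B Q4 model fits in 40 GB
    n_ctx=8192,        # context window; raise it if you have VRAM to spare
)

out = llm("Explain the KV cache in two sentences.", max_tokens=128)
print(out["choices"][0]["text"])
```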
GPUs to Consider Over NVIDIA A100 40GB PCIe
Similar GPUs and upgrades with more VRAM or higher bandwidth for AI
- NVIDIA H100 SXM (Hopper)
- NVIDIA A100 80GB SXM (Ampere)
- NVIDIA H100 PCIe (Hopper)
- AMD Instinct MI210 (CDNA 2)
- NVIDIA RTX 6000 Ada Generation (Ada Lovelace)
- AMD Radeon PRO W7900 (RDNA 3)
Frequently Asked Questions
- Can NVIDIA A100 40GB PCIe run Mixtral 8x7B Instruct v0.1?
Yes, the NVIDIA A100 40GB PCIe with 40 GB can run Mixtral 8x7B Instruct v0.1, Qwen3 32B, DeepSeek R1 Distill Qwen 32B, and 1169 other models. 19 models run at excellent quality, and 107 at good quality. Check the compatibility table above for the full list with VRAM usage and estimated speed.
- Is NVIDIA A100 40GB PCIe good for AI?
The NVIDIA A100 40GB PCIe has 40 GB of HBM2e, making it excellent for running local AI models. It supports 126 models at good quality or better. With 1555.0 GB/s memory bandwidth, it delivers fast token generation. This is a data-center GPU that handles most popular open-source LLMs.
- How many parameters can NVIDIA A100 40GB PCIe handle?
With 40 GB, the NVIDIA A100 40GB PCIe comfortably handles models from 3B to 30B+ parameters, depending on quantization level. At Q4_K_M (the recommended sweet spot), the weights of a model up to roughly 66B parameters fit in VRAM. In practice, that means 7B models at high quality (Q6/Q8) or 30B+ models at Q4.
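The arithmetic behind the 66B figure, assuming Q4_K_M averages about 4.8 bits per weight: max params ≈ (40 GB × 8 bits/byte) ÷ 4.8 bits/weight ≈ 66B, before KV cache and runtime buffers claim their share.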
- What quantization should I use on NVIDIA A100 40GB PCIe?
For the best balance of quality and speed on the NVIDIA A100 40GB PCIe, start with Q4_K_M: it preserves ~85% of the original model quality while keeping VRAM usage reasonable. With 40 GB, you have the headroom to run 7B models at Q5_K_M or even Q6_K for noticeably better output quality. For larger 30B models, Q4_K_M remains the sweet spot.
- How fast is NVIDIA A100 40GB PCIe for AI inference?
With 1555.0 GB/s memory bandwidth, the NVIDIA A100 40GB PCIe achieves approximately 225 tokens/sec on a 7B model at Q4_K_M — that's very fast, well above conversational speed. A 14B model runs at ~112 tok/s. Token generation speed scales inversely with model size — smaller models are significantly faster.
tok/s = (1555 GB/s ÷ model GB) × efficiency
Smaller models = faster inference. Memory bandwidth is the main bottleneck for token generation speed.
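The sketch below applies that formula. The 0.6 efficiency factor is an assumption chosen to match the ~225 tok/s estimate above, not a measured constant, and the 4.2 GB model size assumes Q4_K_M at ~4.8 bits per weight.

```python
# Bandwidth-bound decode estimate: generating each token reads
# (roughly) the entire model from VRAM once.
BANDWIDTH_GBS = 1555.0   # A100 40GB PCIe memory bandwidth
EFFICIENCY = 0.6         # assumed fraction of peak bandwidth actually achieved

def estimate_tok_per_s(model_gb: float) -> float:
    return BANDWIDTH_GBS / model_gb * EFFICIENCY

# A 7B model at Q4_K_M is ~4.2 GB of weights (7 * 4.8 bits / 8):
print(f"{estimate_tok_per_s(4.2):.0f} tok/s")  # ~222, close to the ~225 quoted above
print(f"{estimate_tok_per_s(8.4):.0f} tok/s")  # ~111 for a 14B model, matching ~112
```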
Real-world results typically fall within ±20% of these estimates; speed depends on the quantization kernel, batch size, and software stack.
- What's the best model for NVIDIA A100 40GB PCIe?
The top-rated models for the NVIDIA A100 40GB PCIe are Mixtral 8x7B Instruct v0.1, Qwen3 32B, and DeepSeek R1 Distill Qwen 32B. The best choice depends on your use case: coding assistants benefit from code-tuned models, while general chat works well with instruction-tuned models like Llama or Qwen.