NVIDIA · Ada Lovelace

Best AI Models for NVIDIA L40 (48.0 GB)

VRAM: 48.0 GB GDDR6 · Bandwidth: 864.0 GB/s · CUDA Cores: 18,176 · TDP: 300W

With 48 GB of memory, this is a high-end configuration for local AI. You can comfortably run most open-source LLMs including large 70B parameter models at good quantization levels, making it one of the best setups for serious local AI work.

At this memory tier, nearly every popular open-source model is within reach. You can run Llama 3 70B at Q4_K_M with room to spare (Q5_K_M is a tighter fit), handle coding assistants like DeepSeek Coder 33B at high quality, run 7B–14B models at full FP16 precision, and run 30B models at near-full Q8 quality. Context windows remain generous even with larger models, so multi-turn conversations and long-document processing work smoothly.
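Before downloading a multi-gigabyte model file, it helps to sanity-check the fit. The sketch below is a rough rule of thumb, not this site's estimator: the ~4.85 bits per weight for Q4_K_M GGUF files and the flat 3 GB allowance for KV cache and runtime buffers are assumptions that vary by model and runtime.

```python
# Rule-of-thumb VRAM check for Q4_K_M-quantized models on a 48 GB card.
# Both constants are rough assumptions, not measured values.

Q4_K_M_BPW = 4.85   # approx. bits per weight for Q4_K_M GGUF files
OVERHEAD_GB = 3.0   # crude allowance for KV cache + runtime buffers

def fits_in_vram(params_b: float, vram_gb: float = 48.0) -> bool:
    # params_b is in billions, so params_b * bits / 8 gives gigabytes directly
    weights_gb = params_b * Q4_K_M_BPW / 8
    return weights_gb + OVERHEAD_GB <= vram_gb

for name, params in [("Llama 3 70B", 70.6), ("Qwen 72B", 72.7), ("DeepSeek Coder 33B", 33.3)]:
    print(name, "fits" if fits_in_vram(params) else "does not fit", "at Q4_K_M")
```

By this estimate a 70B model needs about 43 GB of weights plus overhead, which is why 48 GB is the first tier where 70B-class models run comfortably at Q4.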

Runs Well

  • 70B models (Llama 3 70B, Qwen 72B) at Q4–Q5
  • 30B models at Q6–Q8 quality
  • 7B–14B models at full FP16 precision
  • Vision models (LLaVA, CogVLM) without compromise

Challenging

  • Mixture-of-experts models like Mixtral 8x22B, which only fit at lower-bit quants
  • 120B+ models still require lower quantizations

What LLMs Can NVIDIA L40 Run?

32 models · 1 good fit

LLM models compatible with NVIDIA L40 — ranked by performance
Model · Quant · VRAM · Speed · Max ctx · Fit · Grade
Mixtral 8x7B Instruct v0.1 (46.7B) · Q4_K_M · 28.6 GB · 19.6 tok/s · 33K · Good fit · A 76
Qwen3 32B (32B) · Q4_K_M · 19.8 GB · 28.3 tok/s · 41K · Fair fit · B 56
— · Q4_K_M · 20.5 GB · 27.4 tok/s · 131K · Fair fit · B 58
— · Q4_K_M · 20.5 GB · 27.4 tok/s · 33K · Fair fit · B 58
Gemma 3 27B IT (27.4B) · Q4_K_M · 18.1 GB · 31.0 tok/s · 131K · Fair fit · B 53
GPT OSS 20B (21.5B) · Q4_K_M · 13.3 GB · 42.3 tok/s · 131K · Easy run · C 43
QwQ 32B (32B) · Q4_K_M · 20.0 GB · 28.0 tok/s · 41K · Fair fit · B 57
— · Q4_K_M · 21.4 GB · 26.2 tok/s · 4K · Fair fit · B 60
— · Q4_K_M · 18.0 GB · 31.3 tok/s · 8K · Fair fit · B 52
— · Q4_K_M · 15.1 GB · 37.1 tok/s · 33K · Fair fit · B 47
Phi 4 (14B) · Q4_K_M · 9.1 GB · 61.6 tok/s · 16K · Easy run · C 35
— · Q4_K_M · 7.9 GB · 70.9 tok/s · 33K · Easy run · C 34
— · Q4_K_M · 5.0 GB · 112.5 tok/s · 33K · Easy run · C 30
Qwen3 8B (8.2B) · Q4_K_M · 5.5 GB · 101.7 tok/s · 41K · Easy run · C 31
— · Q4_K_M · 5.3 GB · 106.4 tok/s · 131K · Easy run · C 31
— · Q4_K_M · 4.9 GB · 114.1 tok/s · 33K · Easy run · C 30
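If you want to shortlist from this table programmatically, a minimal sketch along these lines works. The rows are transcribed from the named entries above; the 10% safety margin is an arbitrary choice, not something the table prescribes.

```python
# A few named rows from the compatibility table: (model, vram_gb, tok_s, grade_score)
ROWS = [
    ("Mixtral 8x7B Instruct v0.1", 28.6, 19.6, 76),
    ("Qwen3 32B", 19.8, 28.3, 56),
    ("Gemma 3 27B IT", 18.1, 31.0, 53),
    ("GPT OSS 20B", 13.3, 42.3, 43),
    ("Phi 4", 9.1, 61.6, 35),
]

BUDGET_GB = 48.0 * 0.90  # keep ~10% free for KV cache growth and buffers

# Highest-scoring models that fit within the budget, best first
shortlist = sorted((r for r in ROWS if r[1] <= BUDGET_GB),
                   key=lambda r: r[3], reverse=True)
for name, vram, tok_s, score in shortlist:
    print(f"{name}: {vram} GB, ~{tok_s} tok/s, score {score}")
```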

NVIDIA L40 Specifications

Brand
NVIDIA
Architecture
Ada Lovelace
VRAM
48.0 GB GDDR6
Memory Bandwidth
864.0 GB/s
CUDA Cores
18,176
Tensor Cores
568
FP16 Performance
362.10 TFLOPS
TDP
300W
Release Date
2022-10-13

Get Started

Ollama (Recommended)

$ curl -fsSL https://ollama.com/install.sh | sh
$ ollama run llama3:8b
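Once a model is pulled, Ollama also serves a local REST API on port 11434. This sketch calls /api/generate and derives tok/s from the eval_count and eval_duration (nanoseconds) fields in the response, so you can compare your own numbers against the estimates on this page.

```python
import requests

# Ollama's local REST API listens on port 11434 by default
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3:8b", "prompt": "Why is the sky blue?", "stream": False},
).json()

print(resp["response"])

# eval_duration is in nanoseconds; eval_count is generated tokens
tok_s = resp["eval_count"] / resp["eval_duration"] * 1e9
print(f"~{tok_s:.1f} tok/s")
```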

LM Studio

Download LM Studio, search for a model, and run it with one click.
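LM Studio can also serve the loaded model over an OpenAI-compatible local API once you enable its server (port 1234 by default). A minimal sketch, assuming a model is already loaded and serving:

```python
import requests

# LM Studio's local server exposes OpenAI-compatible endpoints on port 1234 by default
resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        # use the model identifier shown in LM Studio's server page
        "model": "local-model",
        "messages": [{"role": "user", "content": "Say hello in five words."}],
    },
).json()

print(resp["choices"][0]["message"]["content"])
```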


Frequently Asked Questions

Can NVIDIA L40 run Mixtral 8x7B Instruct v0.1?

Yes, the NVIDIA L40 with 48 GB can run Mixtral 8x7B Instruct v0.1, Qwen3 32B, DeepSeek R1 Distill Qwen 32B, and 1,221 other models. 12 models run at excellent quality, and 39 at good quality. Check the compatibility table above for the full list with VRAM usage and estimated speed.

Is NVIDIA L40 good for AI?

The NVIDIA L40 has 48 GB of GDDR6, making it excellent for running local AI models. It supports 51 models at good quality or better. With 864.0 GB/s memory bandwidth, it delivers fast token generation speeds. This is a professional-grade GPU that handles most popular open-source LLMs.

How many parameters can NVIDIA L40 handle?

With 48 GB, the NVIDIA L40 supports models from 3B to 70B+ parameters depending on quantization level. At Q4_K_M (the recommended sweet spot), you can fit roughly 80B parameters' worth of weights. In practice, that means 70B models at Q4–Q5, 30B models at Q6–Q8, or 7B–14B models at full FP16.

What quantization should I use on NVIDIA L40?

For the best balance of quality and speed on the NVIDIA L40, start with Q4_K_M: it preserves ~85% of the original model quality while keeping VRAM usage reasonable. With 48 GB, you have the headroom to run 7B–14B models at full FP16 and 30B models at Q6_K or Q8_0 for noticeably better output quality. For 70B models, Q4_K_M remains the sweet spot.
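To see what those quantization choices mean in gigabytes, here is a quick comparison using approximate bits-per-weight figures for GGUF quants. The bpw values are ballpark numbers that vary slightly by model; treat the output as an estimate of weight size only, before KV cache and buffers.

```python
# Approximate bits per weight for common GGUF quantization levels (ballpark figures)
BPW = {"Q4_K_M": 4.85, "Q5_K_M": 5.69, "Q6_K": 6.59, "Q8_0": 8.50, "FP16": 16.0}

for params_b in (8.2, 14, 32, 70.6):
    # billions of params * bits / 8 = gigabytes of weights
    row = "  ".join(f"{q}: {params_b * bpw / 8:5.1f} GB" for q, bpw in BPW.items())
    print(f"{params_b:>5}B  {row}")
```

On 48 GB, this implies 14B at FP16 (~28 GB) and 32B at Q8_0 (~34 GB) fit with headroom, while 70B-class models only fit below roughly Q5.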

How fast is NVIDIA L40 for AI inference?

With 864.0 GB/s memory bandwidth, the NVIDIA L40 achieves roughly 100–115 tokens/sec on a 7B–8B model at Q4_K_M (see the table above), well above conversational speed. A 14B model runs at ~62 tok/s. Token generation speed scales inversely with model size: smaller models are significantly faster.

tok/s ≈ (864 GB/s ÷ model size in GB) × efficiency

Smaller models = faster inference. Memory bandwidth is the main bottleneck for token generation speed.
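In code, the estimate looks like the sketch below. The 0.65 efficiency factor is back-solved from the table above (e.g. 864 ÷ 4.9 × 0.65 ≈ 114 tok/s, matching the smallest entry); it is an assumption about real-world decode efficiency, not a published constant.

```python
# Estimated decode speed: token generation is memory-bandwidth-bound
BANDWIDTH_GB_S = 864.0   # NVIDIA L40 memory bandwidth
EFFICIENCY = 0.65        # assumed; back-solved from the compatibility table above

def est_tok_s(model_gb: float) -> float:
    # Each generated token streams roughly the whole model through memory once
    return BANDWIDTH_GB_S / model_gb * EFFICIENCY

for gb in (4.9, 9.1, 20.5, 28.6):
    print(f"{gb:>5.1f} GB model: ~{est_tok_s(gb):.1f} tok/s")
```

Running this reproduces the table's estimates (114.6, 61.7, 27.4, and 19.6 tok/s), which is a good sign the simple bandwidth model is what drives the numbers on this page.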

Estimated speeds on the NVIDIA L40 are typically within ±20% of real-world results; actual speed depends on the quantization kernel, batch size, and software stack.

What's the best model for NVIDIA L40?

The top-rated models for the NVIDIA L40 are Mixtral 8x7B Instruct v0.1, Qwen3 32B, and DeepSeek R1 Distill Qwen 32B. The best choice depends on your use case: coding assistants benefit from code-tuned models, while general chat works well with instruction-tuned models like Llama or Qwen.