Mistral Small 24B Instruct 2501 GGUF — Hardware Requirements & GPU Compatibility
Specifications
- Publisher: MaziyarPanahi
- Family: Mistral
- Parameters: 24B
- License: Apache 2.0
How Much VRAM Does Mistral Small 24B Instruct 2501 GGUF Need?
| Quantization | Bits per weight | VRAM | File Size | Quality |
|---|---|---|---|---|
| Q2_K | 3.40 | 11.2 GB | 10.20 GB | 2-bit quantization with K-quant improvements |
| Q3_K_S | 3.50 | 11.6 GB | 10.50 GB | 3-bit small quantization |
| Q3_K_M | 3.90 | 12.9 GB | 11.70 GB | 3-bit medium quantization |
| Q3_K_L | 4.10 | 13.5 GB | 12.30 GB | 3-bit large quantization |
| Q4_K_S | 4.50 | 14.8 GB | 13.50 GB | 4-bit small quantization |
| Q4_K_M | 4.80 | 15.8 GB | 14.40 GB | 4-bit medium quantization — most popular sweet spot |
| Q5_K_S | 5.50 | 18.1 GB | 16.50 GB | 5-bit small quantization |
| Q5_K_M | 5.70 | 18.8 GB | 17.10 GB | 5-bit medium quantization — good quality/size tradeoff |
| Q6_K | 6.60 | 21.8 GB | 19.80 GB | 6-bit quantization, very good quality |
| Q8_0 | 8.00 | 26.4 GB | 24.00 GB | 8-bit quantization, near-lossless |
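To sanity-check these figures or see which quantizations fit a particular card, here is a minimal Python sketch. It treats VRAM as the weight size (24B × bits per weight ÷ 8) plus roughly 10% for KV cache and framework overhead; that 10% allowance is inferred from the table above, not an official rule.

```python
# Sketch: reproduce the VRAM column above and check which quantizations fit a GPU.
# VRAM here = weight size (params * bits / 8) plus ~10% for KV cache at 2K context
# and framework overhead, an allowance inferred from this page's figures.
PARAMS_B = 24  # billions of parameters

BITS_PER_WEIGHT = {  # effective bits per weight, from the table above
    "Q2_K": 3.40, "Q3_K_S": 3.50, "Q3_K_M": 3.90, "Q3_K_L": 4.10,
    "Q4_K_S": 4.50, "Q4_K_M": 4.80, "Q5_K_S": 5.50, "Q5_K_M": 5.70,
    "Q6_K": 6.60, "Q8_0": 8.00,
}

def vram_gb(bits: float) -> float:
    weights = PARAMS_B * bits / 8      # roughly the File Size column
    return round(weights * 1.10, 1)    # +~10% overhead, close to the VRAM column

def fits(gpu_vram_gb: float) -> list[str]:
    """Quantizations whose estimated VRAM fits within the given GPU memory."""
    return [q for q, b in BITS_PER_WEIGHT.items() if vram_gb(b) <= gpu_vram_gb]

print(vram_gb(4.80))  # Q4_K_M -> 15.8
print(fits(24))       # a 24 GB card runs up to Q6_K; Q8_0 (26.4 GB) does not fit
```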
Which GPUs Can Run Mistral Small 24B Instruct 2501 GGUF?
Mistral Small 24B Instruct 2501 GGUF at Q4_K_M requires 15.8 GB of VRAM to load the model weights. For comfortable inference with headroom for the KV cache and system overhead, 21+ GB is recommended. 17 GPUs can run it, including the NVIDIA GeForce RTX 5090, NVIDIA GeForce RTX 3090 Ti, and NVIDIA GeForce RTX 5080. Compatible GPUs are rated either "Runs great" (plenty of headroom) or "Decent" (enough VRAM, may be tight).
Which Devices Can Run Mistral Small 24B Instruct 2501 GGUF?
At Q4_K_M (15.8 GB), 27 devices with unified memory can run Mistral Small 24B Instruct 2501 GGUF, including the NVIDIA DGX H100, NVIDIA DGX A100 640GB, and Mac Mini M4 (16 GB).
Frequently Asked Questions
- How much VRAM does Mistral Small 24B Instruct 2501 GGUF need?
Mistral Small 24B Instruct 2501 GGUF requires 15.8 GB of VRAM at Q4_K_M, or 26.4 GB at Q8_0.
VRAM = Weights + KV Cache + Overhead
Weights = 24B × 4.8 bits ÷ 8 = 14.4 GB
KV Cache + Overhead ≈ 1.4 GB (KV cache at 2K context plus ~0.3 GB framework overhead)
- Can NVIDIA GeForce RTX 4090 run Mistral Small 24B Instruct 2501 GGUF?
Yes, at Q6_K (21.8 GB) or lower. Larger quantizations such as Q8_0 (26.4 GB) exceed the NVIDIA GeForce RTX 4090's 24 GB of VRAM.
- What's the best quantization for Mistral Small 24B Instruct 2501 GGUF?
For Mistral Small 24B Instruct 2501 GGUF, Q4_K_M (15.8 GB) offers the best balance of quality and VRAM usage. Q5_K_S (18.1 GB) provides better quality if you have the VRAM. The smallest option is Q2_K at 11.2 GB.
VRAM requirement by quantization:
- Q2_K: 11.2 GB (~75%)
- Q3_K_M: 12.9 GB (~83%)
- Q4_K_M ★: 15.8 GB (~89%)
- Q5_K_S: 18.1 GB (~92%)
- Q5_K_M: 18.8 GB (~92%)
- Q8_0: 26.4 GB (~99%)
★ Recommended: best balance of quality and VRAM usage.
- Can I run Mistral Small 24B Instruct 2501 GGUF on a Mac?
Yes, on Apple silicon, where the GPU shares the Mac's unified memory. The smallest quantization (Q2_K, 11.2 GB) fits on a 16 GB Mac, and the device list above includes the Mac Mini M4 (16 GB) for Q4_K_M (15.8 GB), though with little headroom. For the larger quantizations (Q5_K and up), a Mac with 32 GB or more of unified memory, such as a Mac Studio, is a safer fit.
- Can I run Mistral Small 24B Instruct 2501 GGUF locally?
Yes — Mistral Small 24B Instruct 2501 GGUF can run locally on consumer hardware. At Q4_K_M quantization it needs 15.8 GB of VRAM. Popular tools include Ollama, LM Studio, and llama.cpp.
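As one illustration of the llama.cpp route, here is a minimal sketch using the llama-cpp-python bindings. The model path is a placeholder for wherever you saved the Q4_K_M file, and `n_gpu_layers=-1` offloads every layer to the GPU.

```python
# Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
# The model path below is a placeholder; point it at your downloaded Q4_K_M file.
from llama_cpp import Llama

llm = Llama(
    model_path="./Mistral-Small-24B-Instruct-2501.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,        # context window; larger values grow the KV cache
    n_gpu_layers=-1,   # offload all layers to the GPU (needs ~15.8 GB free VRAM)
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF quantization is."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```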
- How fast is Mistral Small 24B Instruct 2501 GGUF?
At Q4_K_M, Mistral Small 24B Instruct 2501 GGUF can reach ~184 tok/s on an AMD Instinct MI300X and ~41 tok/s on an NVIDIA GeForce RTX 4090. Speed depends mainly on GPU memory bandwidth; real-world results are typically within ±20% of these estimates.
tok/s = (bandwidth GB/s ÷ model GB) × efficiency
Example: AMD Instinct MI300X → 5300 ÷ 15.8 × 0.55 = ~184 tok/s
Estimated speed at Q4_K_M (15.8 GB):
- AMD Instinct MI300X: ~184 tok/s
- NVIDIA GeForce RTX 4090: ~41 tok/s
- NVIDIA H100 SXM: ~138 tok/s
- AMD Instinct MI250X: ~114 tok/s
Real-world results are typically within ±20%. Speed depends on batch size, quantization kernel, and software stack.
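The estimate above can be written as a small helper. The 5300 GB/s figure is the approximate MI300X memory bandwidth used in this page's example, and 0.55 is the same assumed efficiency factor, not a measured benchmark.

```python
# Back-of-envelope decode speed: tok/s ~ (bandwidth GB/s / model GB) * efficiency.
# The 0.55 efficiency factor mirrors this page's example and is an assumption.
def estimate_tok_per_s(bandwidth_gb_s: float, model_gb: float = 15.8,
                       efficiency: float = 0.55) -> float:
    """Rough decode speed: how often per second the weights can be streamed."""
    return bandwidth_gb_s / model_gb * efficiency

print(estimate_tok_per_s(5300))  # AMD Instinct MI300X (~5.3 TB/s) -> ~184 tok/s
```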
- What's the download size of Mistral Small 24B Instruct 2501 GGUF?
At Q4_K_M, the download is about 14.40 GB. The largest, highest-quality option listed (Q8_0) is 24.00 GB, and the smallest (Q2_K) is 10.20 GB.
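If you fetch the file programmatically, a minimal huggingface_hub sketch looks like the following. The repository ID and filename are assumptions based on the publisher and model name above, so confirm them against the repository's file listing.

```python
# Sketch: download a single quantization file rather than the whole repository.
# The repo_id and filename are assumptions inferred from this page; verify them.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="MaziyarPanahi/Mistral-Small-24B-Instruct-2501-GGUF",  # assumed repo id
    filename="Mistral-Small-24B-Instruct-2501.Q4_K_M.gguf",        # assumed filename (~14.4 GB)
)
print(path)  # local cache path of the downloaded GGUF
```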