GPT OSS 20B MXFP4 Q8 — Hardware Requirements & GPU Compatibility
An MLX-optimized MXFP4-Q8 quantized version of OpenAI's GPT-OSS 20B, converted by MLX Community for Apple Silicon Macs. This model uses a mixed-precision quantization scheme with MXFP4 weights and Q8 attention, designed to maximize performance on Apple's unified memory architecture while keeping the memory footprint manageable. GPT-OSS 20B is OpenAI's open-weight entry at 20 billion parameters, and this MLX conversion makes it straightforward to run natively on M-series Macs without any CUDA dependency. Users with 32GB or more of unified memory should be able to run this model comfortably for general-purpose chat, writing, and reasoning tasks.
Specifications
- Publisher: MLX Community
- Family: GPT-OSS
- Parameters: 20B
- Architecture: GptOssForCausalLM
- Context Length: 131,072 tokens
- Vocabulary Size: 201,088
- Release Date: 2025-08-29
- License: Apache 2.0
Get Started
The weights are published by MLX Community on Hugging Face; a minimal Python example follows.
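A quick sketch of loading and prompting the model with the mlx-lm package. The repo id below is a placeholder, not confirmed by this page; use the exact name from the MLX Community page on Hugging Face.

```python
# Minimal sketch using the mlx-lm package (pip install mlx-lm).
# The repo id is a placeholder -- substitute the exact name from the
# MLX Community page on Hugging Face.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/gpt-oss-20b-MXFP4-Q8")  # hypothetical repo id

prompt = "Explain MXFP4 quantization in two sentences."
text = generate(model, tokenizer, prompt=prompt, max_tokens=200)
print(text)
```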
How Much VRAM Does GPT OSS 20B MXFP4 Q8 Need?
The table below lists estimated VRAM usage and file size for each available quantization.
| Quantization | Bits per weight | VRAM (2K context) | VRAM (full 131K context) | File Size | Notes |
|---|---|---|---|---|---|
| IQ2_XXS | 2.20 | 5.9 GB | 10.3 GB | 5.50 GB | Importance-weighted 2-bit, extreme compression — significant quality loss |
| IQ2_XS | 2.40 | 6.4 GB | 10.8 GB | 6.00 GB | Importance-weighted 2-bit, extra small |
| IQ2_S | 2.50 | 6.6 GB | 11.1 GB | 6.25 GB | Importance-weighted 2-bit, small |
| IQ2_M | 2.70 | 7.1 GB | 11.6 GB | 6.75 GB | Importance-weighted 2-bit, medium |
| IQ3_XXS | 3.10 | 8.1 GB | 12.6 GB | 7.75 GB | Importance-weighted 3-bit |
| IQ3_XS | 3.30 | 8.6 GB | 13.1 GB | 8.25 GB | Importance-weighted 3-bit, extra small |
| Q2_K | 3.40 | 8.9 GB | 13.3 GB | 8.50 GB | 2-bit quantization with K-quant improvements |
| Q3_K_S | 3.50 | 9.1 GB | 13.6 GB | 8.75 GB | 3-bit small quantization |
| IQ3_M | 3.60 | 9.4 GB | 13.8 GB | 9.00 GB | Importance-weighted 3-bit, medium |
| Q3_K_M | 3.90 | 10.1 GB | 14.6 GB | 9.75 GB | 3-bit medium quantization |
| Q4_0 | 4.00 | 10.4 GB | 14.8 GB | 10.00 GB | 4-bit legacy quantization |
| Q3_K_L | 4.10 | 10.6 GB | 15.1 GB | 10.25 GB | 3-bit large quantization |
| IQ4_XS | 4.30 | 11.1 GB | 15.6 GB | 10.75 GB | Importance-weighted 4-bit, compact |
| Q4_1 | 4.50 | 11.6 GB | 16.1 GB | 11.25 GB | 4-bit legacy quantization with offset |
| Q4_K_S | 4.50 | 11.6 GB | 16.1 GB | 11.25 GB | 4-bit small quantization |
| IQ4_NL | 4.50 | 11.6 GB | 16.1 GB | 11.25 GB | Importance-weighted 4-bit, non-linear |
| Q4_K_M | 4.80 | 12.4 GB | 16.8 GB | 12.00 GB | 4-bit medium quantization — most popular sweet spot |
| Q4_K_L | 4.90 | 12.6 GB | 17.1 GB | 12.25 GB | 4-bit large quantization |
| Q5_K_S | 5.50 | 14.1 GB | 18.6 GB | 13.75 GB | 5-bit small quantization |
| Q5_K_M | 5.70 | 14.6 GB | 19.1 GB | 14.25 GB | 5-bit medium quantization — good quality/size tradeoff |
| Q5_K_L | 5.80 | 14.9 GB | 19.3 GB | 14.50 GB | 5-bit large quantization |
| Q6_K | 6.60 | 16.9 GB | 21.3 GB | 16.50 GB | 6-bit quantization, very good quality |
| Q8_0 | 8.00 | 20.4 GB | 24.8 GB | 20.00 GB | 8-bit quantization, near-lossless |
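As a rough planning aid, the sketch below picks the largest quantization from this table that fits a given VRAM budget. The numbers are copied from a few rows of the table; whether to budget for the 2K-context or full-context column is up to you.

```python
# Rough helper: choose the largest listed quantization that fits a VRAM budget.
# Values (GB) are taken from the table above: (name, VRAM at 2K ctx, VRAM at full 131K ctx).
QUANTS = [
    ("IQ2_XXS", 5.9, 10.3),
    ("Q2_K",    8.9, 13.3),
    ("Q3_K_M", 10.1, 14.6),
    ("Q4_K_M", 12.4, 16.8),
    ("Q5_K_M", 14.6, 19.1),
    ("Q6_K",   16.9, 21.3),
    ("Q8_0",   20.4, 24.8),
]

def best_fit(vram_gb: float, full_context: bool = False) -> str | None:
    """Return the largest listed quantization that fits, or None."""
    col = 2 if full_context else 1
    fitting = [q for q in QUANTS if q[col] <= vram_gb]
    return fitting[-1][0] if fitting else None

print(best_fit(16))                      # 16 GB GPU, short context -> Q5_K_M
print(best_fit(16, full_context=True))   # 16 GB GPU, full 131K context -> Q3_K_M
```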
Which GPUs Can Run GPT OSS 20B MXFP4 Q8?
At Q4_K_M, GPT OSS 20B MXFP4 Q8 requires 12.4 GB of VRAM to load the model weights. For comfortable inference with headroom for KV cache and system overhead, 17+ GB is recommended. Using the full 131K context window adds roughly 4.4 GB on top of that, bringing total usage to about 16.8 GB. 17 GPUs can run it, including the NVIDIA GeForce RTX 5090, NVIDIA GeForce RTX 3090 Ti, and NVIDIA GeForce RTX 5080.
Which Devices Can Run GPT OSS 20B MXFP4 Q8?
At Q4_K_M (12.4 GB), 27 devices with unified memory can run GPT OSS 20B MXFP4 Q8, including the NVIDIA DGX H100, NVIDIA DGX A100 640GB, and Mac Mini M4 (16 GB).
Frequently Asked Questions
- How much VRAM does GPT OSS 20B MXFP4 Q8 need?
GPT OSS 20B MXFP4 Q8 requires 12.4 GB of VRAM at Q4_K_M, or 20.4 GB at Q8_0. Running the full 131K context adds roughly 4.4 GB, for about 16.8 GB total.
VRAM = Weights + KV Cache + Overhead
Weights = 20B × 4.8 bits ÷ 8 = 12 GB
KV Cache + Overhead ≈ 0.4 GB (at 2K context + ~0.3 GB framework)
KV Cache + Overhead ≈ 4.8 GB (at full 131K context)
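In code, the same back-of-the-envelope estimate looks like this. The constants are approximations implied by the numbers above (about 0.3 GB framework overhead and about 4.5 GB of KV cache at the full 131K context), not measured values.

```python
# Back-of-the-envelope VRAM estimate, mirroring the formula above.
# Constants are approximations implied by this page's numbers.
PARAMS_B = 20              # model size in billions of parameters
FULL_CTX = 131_072         # full context length in tokens
KV_AT_FULL_CTX_GB = 4.5    # approx. KV cache at full context
FRAMEWORK_OVERHEAD_GB = 0.3

def vram_gb(bits_per_weight: float, context_tokens: int) -> float:
    weights = PARAMS_B * bits_per_weight / 8              # e.g. 20 * 4.8 / 8 = 12 GB
    kv_cache = KV_AT_FULL_CTX_GB * context_tokens / FULL_CTX
    return weights + kv_cache + FRAMEWORK_OVERHEAD_GB

print(round(vram_gb(4.8, 2_048), 1))    # Q4_K_M at 2K context   -> ~12.4 GB
print(round(vram_gb(4.8, 131_072), 1))  # Q4_K_M at full context -> ~16.8 GB
```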
VRAM usage by quantization: Q4_K_M needs 12.4 GB; with the full 131K context, 16.8 GB.
- What's the best quantization for GPT OSS 20B MXFP4 Q8?
For GPT OSS 20B MXFP4 Q8, Q4_K_M (12.4 GB) offers the best balance of quality and VRAM usage. Q4_K_L (12.6 GB) provides better quality if you have the VRAM. The smallest option is IQ2_XXS at 5.9 GB.
VRAM requirement by quantization

| Quantization | VRAM | Quality (approx.) |
|---|---|---|
| IQ2_XXS | 5.9 GB | ~53% |
| Q2_K | 8.9 GB | ~75% |
| Q3_K_L | 10.6 GB | ~86% |
| Q4_K_M ★ | 12.4 GB | ~89% |
| Q4_K_L | 12.6 GB | ~90% |
| Q8_0 | 20.4 GB | ~99% |

★ Recommended — best balance of quality and VRAM usage.
- Can I run GPT OSS 20B MXFP4 Q8 on a Mac?
Yes. This release is an MLX conversion built for Apple Silicon, so Macs are its natural home. At Q4_K_M it needs about 12.4 GB, which fits on a 16 GB machine with little headroom; 24-32 GB of unified memory is more comfortable, especially with long contexts. Smaller quantizations such as IQ2_XXS (5.9 GB) fit on lower-memory Macs at a significant quality cost.
- Can I run GPT OSS 20B MXFP4 Q8 locally?
Yes — GPT OSS 20B MXFP4 Q8 can run locally on consumer hardware. At Q4_K_M quantization it needs 12.4 GB of VRAM. Popular tools include Ollama, LM Studio, and llama.cpp.
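If you take the llama.cpp route, the llama-cpp-python binding is one way to load a GGUF file from Python. The filename below is a placeholder for whichever GGUF export you download.

```python
# Sketch using the llama-cpp-python binding (pip install llama-cpp-python).
# The GGUF filename is a placeholder -- substitute the file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="gpt-oss-20b-Q4_K_M.gguf",  # placeholder filename
    n_ctx=8192,        # context window; the full 131K needs far more memory
    n_gpu_layers=-1,   # offload all layers to the GPU if it fits
)

out = llm("Summarize what MXFP4 quantization is.", max_tokens=128)
print(out["choices"][0]["text"])
```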
- How fast is GPT OSS 20B MXFP4 Q8?
At Q4_K_M, GPT OSS 20B MXFP4 Q8 can reach roughly 236 tok/s on an AMD Instinct MI300X and roughly 53 tok/s on an NVIDIA GeForce RTX 4090. Speed depends mainly on GPU memory bandwidth; real-world results are typically within ±20% of these estimates.
tok/s = (bandwidth GB/s ÷ model GB) × efficiency
Example: AMD Instinct MI300X → 5300 ÷ 12.4 × 0.55 = ~236 tok/s
Estimated speed at Q4_K_M (12.4 GB)

| GPU | Estimated speed |
|---|---|
| AMD Instinct MI300X | ~236 tok/s |
| NVIDIA GeForce RTX 4090 | ~53 tok/s |
| NVIDIA H100 SXM | ~176 tok/s |
| AMD Instinct MI250X | ~146 tok/s |

Real-world results are typically within ±20%; speed depends on batch size, quantization kernel, and software stack.
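The sketch below applies the same bandwidth-based estimate. The 0.55 efficiency factor comes from the MI300X example above; the factor implied by the NVIDIA entries is closer to 0.65, so treat it as a tunable assumption rather than a fixed constant.

```python
# Rough decode-speed estimate: tok/s ~= (memory bandwidth / model size) * efficiency.
# Bandwidth figures are published peak values; efficiency is a rough, stack-dependent factor.
MODEL_GB = 12.4  # Q4_K_M footprint from this page

def est_tok_per_s(bandwidth_gb_s: float, efficiency: float = 0.55) -> float:
    return bandwidth_gb_s / MODEL_GB * efficiency

print(round(est_tok_per_s(5300)))        # AMD Instinct MI300X        -> ~235 tok/s
print(round(est_tok_per_s(1008, 0.65)))  # NVIDIA RTX 4090 (0.65 eff.) -> ~53 tok/s
```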
- What's the download size of GPT OSS 20B MXFP4 Q8?
At Q4_K_M, the download is about 12.00 GB. The near-lossless Q8_0 version is 20.00 GB, and the smallest option (IQ2_XXS) is 5.50 GB.