QwQ-32B-Preview Q4_K_M GGUF — Hardware Requirements & GPU Compatibility
Tags: Chat, Reasoning
Specifications
- Publisher: nanowell
- Family: QwQ
- Parameters: 32B
- License: Apache 2.0
Get Started: HuggingFace
How Much VRAM Does QwQ-32B-Preview Q4_K_M GGUF Need?
| Quantization | Bits/weight | VRAM | + Context | File Size | Quality |
|---|---|---|---|---|---|
| Q4_K_M | 4.80 | 21.1 GB | — | 19.20 GB | 4-bit medium quantization — most popular sweet spot |
Which GPUs Can Run QwQ-32B-Preview Q4_K_M GGUF?
Q4_K_M · 21.1 GB
QwQ-32B-Preview Q4_K_M GGUF (Q4_K_M) requires 21.1 GB of VRAM to load the model weights. For comfortable inference with headroom for KV cache and system overhead, 28+ GB is recommended. 5 GPUs can run it, including the NVIDIA GeForce RTX 5090 and NVIDIA GeForce RTX 3090 Ti.
Runs great — Plenty of headroom
Decent — Enough VRAM, may be tight
Which Devices Can Run QwQ-32B-Preview Q4_K_M GGUF?
Q4_K_M · 21.1 GB
21 devices with unified memory can run QwQ-32B-Preview Q4_K_M GGUF, including NVIDIA DGX H100, NVIDIA DGX A100 640GB, and Mac Mini M4 Pro (24 GB).
Runs great — Plenty of headroom
Frequently Asked Questions
- How much VRAM does QwQ-32B-Preview Q4_K_M GGUF need?
QwQ-32B-Preview Q4_K_M GGUF requires 21.1 GB of VRAM at Q4_K_M:
VRAM = Weights + KV Cache + Overhead
Weights = 32B × 4.8 bits ÷ 8 = 19.2 GB
KV Cache + Overhead ≈ 1.9 GB (KV cache at 2K context plus ~0.3 GB framework overhead)
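As a sanity check, the same arithmetic can be written as a minimal Python sketch. The KV-cache and overhead figures are the 2K-context estimates from above, not measured values:

```python
def estimate_vram_gb(params_b: float, bits_per_weight: float,
                     kv_cache_gb: float = 1.6, overhead_gb: float = 0.3) -> float:
    """Rough VRAM estimate: weights + KV cache + framework overhead."""
    weights_gb = params_b * bits_per_weight / 8  # billions of params x bits -> GB
    return weights_gb + kv_cache_gb + overhead_gb

# QwQ-32B-Preview at Q4_K_M (~4.8 bits per weight)
print(estimate_vram_gb(32, 4.8))  # -> 21.1 GB
```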
VRAM usage by quantization: 21.1 GB at Q4_K_M
- Can I run QwQ-32B-Preview Q4_K_M GGUF on a Mac?
QwQ-32B-Preview Q4_K_M GGUF requires at least 21.1 GB of unified memory at Q4_K_M, which rules out 8 GB and 16 GB Macs. You need an Apple Silicon Mac with at least 24 GB of unified memory (a tight fit), and ideally 32 GB or more, such as a Mac Mini M4 Pro, Mac Studio, or Mac Pro.
- Can I run QwQ-32B-Preview Q4_K_M GGUF locally?
Yes — QwQ-32B-Preview Q4_K_M GGUF can run locally on consumer hardware. At Q4_K_M quantization it needs 21.1 GB of VRAM. Popular tools include Ollama, LM Studio, and llama.cpp; a minimal llama.cpp-based example is sketched below.
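A minimal sketch using llama-cpp-python (Python bindings for llama.cpp). The model path is a placeholder for wherever you saved the downloaded GGUF file:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="./qwq-32b-preview-q4_k_m.gguf",  # placeholder filename
    n_gpu_layers=-1,  # offload all layers to the GPU if VRAM allows
    n_ctx=2048,       # matches the 2K-context VRAM estimate above
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain the KV cache in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```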
- How fast is QwQ-32B-Preview Q4_K_M GGUF?
At Q4_K_M, QwQ-32B-Preview Q4_K_M GGUF can reach ~138 tok/s on an AMD Instinct MI300X and ~31 tok/s on an NVIDIA GeForce RTX 4090. Speed depends mainly on GPU memory bandwidth. Real-world results are typically within ±20%.
tok/s = (bandwidth GB/s ÷ model GB) × efficiency
Example: AMD Instinct MI300X → 5300 ÷ 21.1 × 0.55 = ~138 tok/s
Estimated speed at Q4_K_M (21.1 GB)

| GPU | Estimated speed |
|---|---|
| AMD Instinct MI300X | ~138 tok/s |
| NVIDIA H100 SXM | ~103 tok/s |
| AMD Instinct MI250X | ~85 tok/s |
| NVIDIA GeForce RTX 4090 | ~31 tok/s |

Real-world results are typically within ±20%. Speed depends on batch size, quantization kernel, and software stack.
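The speed formula above can also be expressed as a small Python helper. The 0.55 efficiency factor comes from the worked MI300X example; actual efficiency varies by GPU, kernel, and software stack:

```python
def estimate_tok_per_s(bandwidth_gb_s: float, model_gb: float,
                       efficiency: float = 0.55) -> float:
    """Decode-speed estimate for memory-bound inference:
    each generated token streams the full model weights once."""
    return bandwidth_gb_s / model_gb * efficiency

# AMD Instinct MI300X: ~5300 GB/s memory bandwidth, 21.1 GB model
print(round(estimate_tok_per_s(5300, 21.1)))  # -> ~138 tok/s
```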
- What's the download size of QwQ-32B-Preview Q4_K_M GGUF?
At Q4_K_M, the download is about 19.20 GB.
- Which GPUs can run QwQ-32B-Preview Q4_K_M GGUF?
5 consumer GPUs can run QwQ-32B-Preview Q4_K_M GGUF at Q4_K_M (21.1 GB). Top options include the NVIDIA GeForce RTX 5090, AMD Radeon RX 7900 XTX, and NVIDIA GeForce RTX 3090. 1 GPU has plenty of headroom for comfortable inference.
- Which devices can run QwQ-32B-Preview Q4_K_M GGUF?
21 devices with unified memory can run QwQ-32B-Preview Q4_K_M GGUF at Q4_K_M (21.1 GB), including Mac Mini M4 (32 GB), Mac Mini M4 Pro (24 GB), Mac Mini M4 Pro (48 GB), and Mac Pro M2 Ultra (192 GB). Apple Silicon Macs use unified memory shared between CPU and GPU, making them well-suited for local LLM inference.