GLM 4.7 — Hardware Requirements & GPU Compatibility
GLM 4.7 is an earlier generation of Zhipu AI's GLM foundation model series, featuring a mixture-of-experts architecture with approximately 358 billion total parameters. It delivers strong performance on reasoning, language understanding, and bilingual Chinese-English tasks, while being significantly more manageable to run locally than its GLM 5 successor. For users with multi-GPU setups, GLM 4.7 offers a practical balance between capability and hardware requirements within the GLM model family.
Specifications
- Publisher: zai-org
- Family: GLM
- Parameters: 358.3B
- Architecture: Glm4MoeForCausalLM
- Context Length: 202,752 tokens
- Vocabulary Size: 151,552
- Release Date: 2026-01-29
- License: MIT
Get Started
HuggingFace
How Much VRAM Does GLM 4.7 Need?
| Quantization | Bits/weight | VRAM (weights) | VRAM + full context | File Size | Quality |
|---|---|---|---|---|---|
| IQ2_XXS | 2.20 | 99.2 GB | 130.7 GB | 98.54 GB | Importance-weighted 2-bit, extreme compression — significant quality loss |
| IQ2_M | 2.70 | 121.6 GB | 153.1 GB | 120.94 GB | Importance-weighted 2-bit, medium |
| IQ3_XXS | 3.10 | 139.5 GB | 171.0 GB | 138.86 GB | Importance-weighted 3-bit |
| Q2_K | 3.40 | 152.9 GB | 184.4 GB | 152.29 GB | 2-bit quantization with K-quant improvements |
| Q3_K_S | 3.50 | 157.4 GB | 188.9 GB | 156.77 GB | 3-bit small quantization |
| Q3_K_M | 3.90 | 175.3 GB | 206.8 GB | 174.69 GB | 3-bit medium quantization |
| Q4_0 | 4.00 | 179.8 GB | 211.3 GB | 179.17 GB | 4-bit legacy quantization |
| IQ4_XS | 4.30 | 193.2 GB | 224.7 GB | 192.61 GB | Importance-weighted 4-bit, compact |
| Q4_1 | 4.50 | 202.2 GB | 233.7 GB | 201.57 GB | 4-bit legacy quantization with offset |
| Q4_K_S | 4.50 | 202.2 GB | 233.7 GB | 201.57 GB | 4-bit small quantization |
| IQ4_NL | 4.50 | 202.2 GB | 233.7 GB | 201.57 GB | Importance-weighted 4-bit, non-linear |
| Q4_K_M | 4.80 | 215.6 GB | 247.1 GB | 215.00 GB | 4-bit medium quantization — most popular sweet spot |
| Q5_K_S | 5.50 | 247.0 GB | 278.5 GB | 246.36 GB | 5-bit small quantization |
| Q5_K_M | 5.70 | 255.9 GB | 287.4 GB | 255.32 GB | 5-bit medium quantization — good quality/size tradeoff |
| Q6_K | 6.60 | 296.3 GB | 327.8 GB | 295.63 GB | 6-bit quantization, very good quality |
| Q8_0 | 8.00 | 359.0 GB | 390.5 GB | 358.34 GB | 8-bit quantization, near-lossless |
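The file-size and weight-VRAM columns above follow directly from the parameter count and the effective bits per weight. A minimal sketch in Python; the 358.34B parameter figure is inferred from the Q8_0 file size (8 bits/weight ⇒ 1 byte per parameter) and the table's use of decimal GB:

```python
# Estimate GLM 4.7 weight size from parameter count and quantization
# bits-per-weight. Assumes 358.34B parameters (inferred from the Q8_0
# file size) and decimal GB, matching the table above.

PARAMS = 358.34e9  # total parameters

def weights_gb(bits_per_weight: float) -> float:
    """Weight storage in decimal GB: params * bits / 8 bytes."""
    return PARAMS * bits_per_weight / 8 / 1e9

for name, bpw in [("IQ2_XXS", 2.20), ("Q4_K_M", 4.80), ("Q8_0", 8.00)]:
    print(f"{name}: {weights_gb(bpw):.2f} GB")
```

Running this reproduces the table's file sizes to within rounding (e.g. Q4_K_M ≈ 215.00 GB).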
Which GPUs Can Run GLM 4.7?
Q4_K_M · 215.6 GB

GLM 4.7 (Q4_K_M) requires 215.6 GB of VRAM to load the model weights. For comfortable inference with headroom for KV cache and system overhead, 281+ GB is recommended. Using the full 203K context window can add up to 31.5 GB, bringing total usage to 247.1 GB. No single GPU has enough memory — multi-GPU or cluster setups are needed.
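Since no single GPU fits the model, a quick way to size a multi-GPU rig is to divide the full-context requirement by usable per-GPU memory. A sketch, assuming roughly 10% of each GPU is lost to framework overhead and fragmentation (that utilization factor is an assumption, not a measured value):

```python
import math

REQUIRED_GB = 247.1  # Q4_K_M weights + full 203K context, from above

def gpus_needed(gpu_vram_gb: float, utilization: float = 0.9) -> int:
    """GPUs needed, leaving ~10% per-GPU headroom (assumed overhead)."""
    return math.ceil(REQUIRED_GB / (gpu_vram_gb * utilization))

print(gpus_needed(80))   # 80 GB parts (H100/A100-80GB class) → 4
print(gpus_needed(141))  # 141 GB parts (H200 class) → 2
```

An 8-GPU 80 GB server (640 GB total, DGX-class) clears this with room to spare, which matches the device list below.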
Which Devices Can Run GLM 4.7?
Q4_K_M · 215.6 GB

Two multi-GPU systems with enough pooled GPU memory can run GLM 4.7 at Q4_K_M: the NVIDIA DGX H100 and the NVIDIA DGX A100 640GB.
Frequently Asked Questions
- How much VRAM does GLM 4.7 need?
GLM 4.7 requires 215.6 GB of VRAM at Q4_K_M, or 359.0 GB at Q8_0. Full 203K context adds up to 31.5 GB (247.1 GB total).
VRAM = Weights + KV Cache + Overhead
Weights = 358.3B × 4.8 bits ÷ 8 = 215 GB
KV Cache + Overhead ≈ 0.6 GB (at 2K context + ~0.3 GB framework)
KV Cache + Overhead ≈ 32.1 GB (at full 203K context)
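The two operating points above imply a roughly linear KV-cache cost per token of context. A sketch deriving it from the document's own numbers (subtracting the stated ~0.3 GB framework overhead from both points):

```python
# Derive approximate KV-cache VRAM per token of context from the two
# stated operating points: 0.6 GB at 2K context, 32.1 GB at full 203K
# (each including ~0.3 GB framework overhead).

FULL_CTX, SMALL_CTX = 202_752, 2_048
KV_FULL_GB = 32.1 - 0.3
KV_SMALL_GB = 0.6 - 0.3

mb_per_token = (KV_FULL_GB - KV_SMALL_GB) * 1e3 / (FULL_CTX - SMALL_CTX)

def context_vram_gb(tokens: int) -> float:
    """Estimated KV-cache VRAM (GB) for a given context length."""
    return tokens * mb_per_token / 1e3

print(f"~{mb_per_token:.3f} MB per token of context")
print(f"32K context: ~{context_vram_gb(32_768):.1f} GB")
```

This works out to roughly 0.16 MB per token, so a more typical 32K context costs about 5 GB of KV cache rather than the full 31.5 GB.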
VRAM usage by quantization
- Q4_K_M: 215.6 GB
- Q4_K_M + full context: 247.1 GB
- Can NVIDIA GeForce RTX 5090 run GLM 4.7?
No — GLM 4.7 requires at least 99.2 GB at IQ2_XXS, which exceeds the NVIDIA GeForce RTX 5090's 32 GB of VRAM.
- What's the best quantization for GLM 4.7?
For GLM 4.7, Q4_K_M (215.6 GB) offers the best balance of quality and VRAM usage. Q5_K_S (247.0 GB) provides better quality if you have the VRAM. The smallest option is IQ2_XXS at 99.2 GB.
VRAM requirement by quantization
- IQ2_XXS: 99.2 GB (~53%)
- Q3_K_S: 157.4 GB (~77%)
- Q4_1: 202.2 GB (~88%)
- Q4_K_M ★: 215.6 GB (~89%)
- Q5_K_S: 247.0 GB (~92%)
- Q8_0: 359.0 GB (~99%)

★ Recommended: best balance of quality and VRAM usage.
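Given a fixed VRAM budget, the practical question is which quantization is the largest one that still fits. A small helper using the weight-VRAM figures from the table above (a subset of the quants, for brevity):

```python
# Pick the highest-quality GLM 4.7 quantization whose weights fit a
# given VRAM budget. Figures (decimal GB) are from the table above.

QUANTS = [  # (name, weight VRAM in GB), smallest to largest
    ("IQ2_XXS", 99.2), ("Q2_K", 152.9), ("Q3_K_M", 175.3),
    ("Q4_K_M", 215.6), ("Q5_K_M", 255.9), ("Q6_K", 296.3), ("Q8_0", 359.0),
]

def best_quant(budget_gb: float):
    """Largest quant that fits; None if even IQ2_XXS does not."""
    fitting = [name for name, gb in QUANTS if gb <= budget_gb]
    return fitting[-1] if fitting else None

print(best_quant(281))  # the recommended Q4_K_M budget also fits Q5_K_M
print(best_quant(32))   # a single RTX 5090: nothing fits → None
```

Note this checks weights only; leave extra headroom for KV cache and overhead as described above.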
- Can I run GLM 4.7 on a Mac?
GLM 4.7 requires at least 99.2 GB at IQ2_XXS, which exceeds the unified memory of most consumer Macs. You would need a Mac Studio or Mac Pro with a high-memory configuration.
- Can I run GLM 4.7 locally?
GLM 4.7 can run locally, but not on typical consumer hardware: at Q4_K_M quantization it needs 215.6 GB of VRAM, which calls for a multi-GPU server or a high-memory workstation. Popular tools include Ollama, LM Studio, and llama.cpp.
- What's the download size of GLM 4.7?
At Q4_K_M, the download is about 215.00 GB. The near-lossless Q8_0 version is 358.34 GB. The smallest option (IQ2_XXS) is 98.54 GB.