
GLM 5 FP8 — Hardware Requirements & GPU Compatibility


GLM 5 FP8 is the FP8-quantized release of Zhipu AI's 754-billion-parameter flagship model, reducing memory requirements by storing weights in 8-bit floating-point precision. This quantization roughly halves the VRAM needed compared to the full-precision version while preserving most of the model's capability across reasoning, coding, and multilingual tasks. It remains a demanding model to run locally, but FP8 quantization meaningfully lowers the hardware barrier for users with high-end multi-GPU setups.

4.3M downloads · 143 likes · Mar 2026 · 203K context

Specifications

Publisher: zai-org
Family: GLM
Parameters: 753.9B
Architecture: GlmMoeDsaForCausalLM
Context Length: 202,752 tokens
Vocabulary Size: 154,880
Release Date: 2026-03-11
License: MIT


How Much VRAM Does GLM 5 FP8 Need?

The table below lists the estimated VRAM needed to load the model at each available quantization.

Quantization  Bits  VRAM
IQ2_XXS       2.20  211.6 GB
IQ2_M         2.70  258.7 GB
IQ3_XXS       3.10  296.4 GB
Q2_K          3.40  324.6 GB
Q3_K_S        3.50  334.1 GB
Q3_K_M        3.90  371.8 GB
Q4_0          4.00  381.2 GB
IQ4_XS        4.30  409.4 GB
Q4_1          4.50  428.3 GB
Q4_K_S        4.50  428.3 GB
IQ4_NL        4.50  428.3 GB
Q4_K_M        4.80  456.6 GB
Q5_K_S        5.50  522.5 GB
Q5_K_M        5.70  541.4 GB
Q6_K          6.60  626.2 GB
Q8_0          8.00  758.1 GB

Which GPUs Can Run GLM 5 FP8?

Q4_K_M · 456.6 GB

GLM 5 FP8 (Q4_K_M) requires 456.6 GB of VRAM to load the model weights. For comfortable inference with headroom for KV cache and system overhead, 594+ GB is recommended. Using the full 203K context window can add up to 384.7 GB, bringing total usage to 841.3 GB. No single GPU has enough memory — multi-GPU or cluster setups are needed.
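As a rough sketch, those figures translate into GPU counts. The numbers below assume 80 GB cards (H100/A100-class) and reserve about 10% of each card for runtime overhead; both the per-card capacity and the usable fraction are illustrative assumptions, not figures from this page.

```python
import math

# Figures from the section above (Q4_K_M quantization)
WEIGHTS_GB = 456.6         # loaded weights plus the small 2K-context baseline
FULL_CTX_TOTAL_GB = 841.3  # total VRAM at the full 203K context window

def gpus_needed(total_gb: float, per_gpu_gb: float = 80.0,
                usable_fraction: float = 0.9) -> int:
    """Round up to the number of cards whose usable VRAM covers total_gb."""
    return math.ceil(total_gb / (per_gpu_gb * usable_fraction))

print(gpus_needed(WEIGHTS_GB))         # 7 x 80 GB GPUs for the weights alone
print(gpus_needed(FULL_CTX_TOTAL_GB))  # 12 x 80 GB GPUs at full context
```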

Which Devices Can Run GLM 5 FP8?

Q4_K_M · 456.6 GB

2 devices with unified memory can run GLM 5 FP8, including the NVIDIA DGX H100, rated "Decent" (enough memory, but it may be tight).


Frequently Asked Questions

How much VRAM does GLM 5 FP8 need?

GLM 5 FP8 requires 456.6 GB of VRAM at Q4_K_M, or 758.1 GB at Q8_0. Full 203K context adds up to 384.7 GB (841.3 GB total).

VRAM = Weights + KV Cache + Overhead

Weights = 753.9B × 4.8 bits ÷ 8 = 452.3 GB

KV Cache + Overhead ≈ 4.3 GB at 2K context (including ~0.3 GB framework overhead)

KV Cache + Overhead ≈ 389 GB at the full 203K context

At Q4_K_M, total VRAM therefore ranges from 456.6 GB (2K context) to 841.3 GB (full 203K context).
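Here is a minimal sketch of that formula in Python. The linear scaling of KV cache with context length is an assumption for illustration, calibrated to the two data points above rather than taken from this page.

```python
PARAMS_B = 753.9        # parameters, in billions
FULL_CTX = 202_752      # maximum context length, in tokens
KV_FULL_CTX_GB = 388.7  # KV cache at full context (389 GB minus framework overhead)
OVERHEAD_GB = 0.3       # approximate framework overhead

def estimate_vram_gb(bits_per_weight: float, ctx_tokens: int) -> float:
    weights = PARAMS_B * bits_per_weight / 8           # decimal GB
    kv_cache = KV_FULL_CTX_GB * ctx_tokens / FULL_CTX  # assumed linear in context
    return weights + kv_cache + OVERHEAD_GB

print(f"{estimate_vram_gb(4.8, 2_048):.1f} GB")     # 456.6 GB: Q4_K_M at 2K context
print(f"{estimate_vram_gb(4.8, FULL_CTX):.1f} GB")  # 841.3 GB: Q4_K_M at full context
```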

Learn more about VRAM estimation →

Can NVIDIA GeForce RTX 5090 run GLM 5 FP8?

No — GLM 5 FP8 requires at least 211.6 GB at IQ2_XXS, which exceeds the NVIDIA GeForce RTX 5090's 32 GB of VRAM.

What's the best quantization for GLM 5 FP8?

For GLM 5 FP8, Q4_K_M (456.6 GB) offers the best balance of quality and VRAM usage. Q5_K_S (522.5 GB) provides better quality if you have the VRAM. The smallest option is IQ2_XXS at 211.6 GB.

VRAM requirement by quantization

IQ2_XXS: 211.6 GB
Q3_K_S: 334.1 GB
Q4_1: 428.3 GB
Q4_K_M: 456.6 GB ★
Q5_K_S: 522.5 GB
Q8_0: 758.1 GB

★ Recommended — best balance of quality and VRAM usage.
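If you want to automate the choice, a small helper along these lines (a sketch built from the figures in the chart above) returns the largest quantization that fits a given VRAM budget:

```python
# (quantization, VRAM in GB) from the chart above, smallest to largest
QUANTS = [
    ("IQ2_XXS", 211.6), ("Q3_K_S", 334.1), ("Q4_1", 428.3),
    ("Q4_K_M", 456.6), ("Q5_K_S", 522.5), ("Q8_0", 758.1),
]

def best_quant(vram_budget_gb: float) -> str | None:
    """Return the highest-quality quantization that fits the budget."""
    fitting = [name for name, gb in QUANTS if gb <= vram_budget_gb]
    return fitting[-1] if fitting else None

print(best_quant(512))  # Q4_K_M (e.g. a 512 GB unified-memory machine)
print(best_quant(128))  # None: even IQ2_XXS does not fit
```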

Learn more about quantization →

Can I run GLM 5 FP8 on a Mac?

GLM 5 FP8 requires at least 211.6 GB at IQ2_XXS, which exceeds the unified memory of most consumer Macs. You would need a Mac Studio or Mac Pro with a high-memory configuration.

Can I run GLM 5 FP8 locally?

Yes, but not on consumer hardware: at Q4_K_M quantization it needs 456.6 GB of VRAM, which calls for a multi-GPU server or cluster rather than a single desktop GPU. Popular tools include Ollama, LM Studio, and llama.cpp.
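For illustration, here is a minimal loading sketch using the llama-cpp-python bindings (the Python interface to llama.cpp). The GGUF filename, shard count, and eight-way tensor split are hypothetical and depend on your actual files and hardware.

```python
from llama_cpp import Llama  # pip install llama-cpp-python (built with GPU support)

# Hypothetical local path to a Q4_K_M GGUF of GLM 5 FP8; real filenames and
# shard layout depend on the published GGUF conversion.
llm = Llama(
    model_path="./glm-5-fp8-Q4_K_M-00001-of-00010.gguf",
    n_gpu_layers=-1,         # offload all layers to GPU
    n_ctx=8192,              # modest context to keep the KV cache small
    tensor_split=[1.0] * 8,  # spread weights evenly across 8 GPUs (assumed node)
)

out = llm("Q: What is FP8 quantization? A:", max_tokens=64)
print(out["choices"][0]["text"])
```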

What's the download size of GLM 5 FP8?

At Q4_K_M, the download is about 452.35 GB. The largest quantization, Q8_0, is 753.91 GB. The smallest option (IQ2_XXS) is 207.33 GB.
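These sizes follow directly from parameter count times bits per weight; a quick sanity check in Python (decimal GB, matching the figures above):

```python
PARAMS_B = 753.91  # parameters, in billions

def download_size_gb(bits_per_weight: float) -> float:
    # GGUF file size is roughly parameters x bits-per-weight, in decimal GB
    return PARAMS_B * bits_per_weight / 8

for name, bits in [("IQ2_XXS", 2.2), ("Q4_K_M", 4.8), ("Q8_0", 8.0)]:
    print(f"{name}: {download_size_gb(bits):.2f} GB")
# IQ2_XXS: 207.33 GB, Q4_K_M: 452.35 GB, Q8_0: 753.91 GB
```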