
GLM 5 Abliterated — Hardware Requirements & GPU Compatibility

Based on GLM 5

Specifications

Publisher: skyblanket
Family: GLM
Parameters: 753.9B
Architecture: GlmMoeDsaForCausalLM
Context Length: 202,752 tokens
Vocabulary Size: 154,880
Release Date: 2026-02-22
License: Apache 2.0


How Much VRAM Does GLM 5 Abliterated Need?

Estimated VRAM requirements by quantization:

Quantization   Bits   VRAM
Q2_K           3.40   324.6 GB
Q3_K_S         3.50   334.0 GB
Q3_K_M         3.90   371.7 GB
Q4_0           4.00   381.2 GB
Q4_K_M         4.80   456.5 GB
Q5_K_M         5.70   541.4 GB
Q6_K           6.60   626.2 GB
Q8_0           8.00   758.1 GB
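
The VRAM column follows, to within rounding, from the parameter count and the effective bits per weight, plus a few gigabytes of KV cache and framework overhead at short context. A minimal sketch in Python, assuming a fixed ~4.2 GB overhead term (a constant inferred from the figures above, not an official number):

```python
# Sketch: reproduce the VRAM column above from the effective bits per weight.
# Assumption: VRAM ≈ quantized weights + ~4.2 GB KV-cache/framework overhead
# at short (~2K-token) context; the 4.2 GB constant is inferred, not official.

PARAMS_B = 753.9      # parameter count, in billions
OVERHEAD_GB = 4.2     # short-context KV cache plus ~0.3 GB framework overhead

def vram_gb(bits_per_weight: float) -> float:
    """Estimated VRAM in GB: quantized weights plus a small fixed overhead."""
    weights_gb = PARAMS_B * bits_per_weight / 8
    return weights_gb + OVERHEAD_GB

for name, bits in [("Q2_K", 3.40), ("Q4_K_M", 4.80), ("Q8_0", 8.00)]:
    print(f"{name}: {vram_gb(bits):.1f} GB")   # 324.6, 456.5, 758.1 GB
```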

Which GPUs Can Run GLM 5 Abliterated?

Q4_K_M · 456.5 GB

GLM 5 Abliterated (Q4_K_M) requires 456.5 GB of VRAM to load the model weights. For comfortable inference with headroom for KV cache and system overhead, 594+ GB is recommended. Using the full 203K context window can add up to 384.7 GB, bringing total usage to 841.3 GB. No single GPU has enough memory — multi-GPU or cluster setups are needed.

Which Devices Can Run GLM 5 Abliterated?

Q4_K_M · 456.5 GB

Two systems have enough memory to run GLM 5 Abliterated at Q4_K_M: the NVIDIA DGX A100 640GB and the NVIDIA DGX H100.

Decent: enough memory, but it may be tight.


Frequently Asked Questions

How much VRAM does GLM 5 Abliterated need?

GLM 5 Abliterated requires 456.5 GB of VRAM at Q4_K_M, or 758.1 GB at Q8_0. Full 203K context adds up to 384.7 GB (841.3 GB total).

VRAM = Weights + KV Cache + Overhead

Weights = 753.9B × 4.8 bits ÷ 8 = 452.3 GB

KV Cache + Overhead ≈ 4.2 GB at 2K context (including ~0.3 GB framework overhead)

KV Cache + Overhead ≈ 389 GB at the full 203K context
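
From the two figures above, the KV cache grows roughly linearly with context length, at about 1.9 GB per 1,000 tokens. A rough sketch for Q4_K_M; the per-token rate is inferred from the quoted 4.2 GB and 389 GB figures and is an approximation, not a published value:

```python
# Rough sketch of the Q4_K_M VRAM estimate above. The KV-cache rate of
# ~1.92 GB per 1K tokens is inferred from the two quoted data points
# (≈4.2 GB near 2K context, ≈389 GB at the full 203K context); it is an
# approximation, not a published figure.

PARAMS_B = 753.9            # parameters, in billions
BITS_Q4_K_M = 4.8           # effective bits per weight for Q4_K_M
KV_GB_PER_1K_TOKENS = 1.92  # inferred KV-cache growth rate
FRAMEWORK_GB = 0.3          # fixed framework overhead

def total_vram_gb(context_tokens: int) -> float:
    weights_gb = PARAMS_B * BITS_Q4_K_M / 8               # ~452.3 GB of weights
    kv_gb = context_tokens * KV_GB_PER_1K_TOKENS / 1000   # KV cache grows with context
    return weights_gb + kv_gb + FRAMEWORK_GB

print(f"{total_vram_gb(2_048):.1f} GB")     # ~456.6 GB (the page quotes 456.5 GB)
print(f"{total_vram_gb(202_752):.1f} GB")   # ~841.9 GB (the page quotes 841.3 GB)
```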

At Q4_K_M, VRAM usage ranges from 456.5 GB (short context) to 841.3 GB (full 203K context).


Can NVIDIA GeForce RTX 5090 run GLM 5 Abliterated?

No — GLM 5 Abliterated requires at least 211.5 GB at IQ2_XXS, which exceeds the NVIDIA GeForce RTX 5090's 32 GB of VRAM.

What's the best quantization for GLM 5 Abliterated?

For GLM 5 Abliterated, Q4_K_M (456.5 GB) offers the best balance of quality and VRAM usage. Q5_K_S (522.5 GB) provides better quality if you have the VRAM. The smallest option is IQ2_XXS at 211.5 GB.

VRAM requirement by quantization:

IQ2_XXS    211.5 GB
Q3_K_S     334.0 GB
Q4_1       428.3 GB
Q4_K_M     456.5 GB  ★
Q5_K_S     522.5 GB
Q8_0       758.1 GB

★ Recommended — best balance of quality and VRAM usage.


Can I run GLM 5 Abliterated on a Mac?

GLM 5 Abliterated requires at least 211.5 GB at IQ2_XXS, which exceeds the unified memory of most consumer Macs. You would need a Mac Studio or Mac Pro with a high-memory configuration.

Can I run GLM 5 Abliterated locally?

Only with server-class hardware: at Q4_K_M quantization it needs 456.5 GB of VRAM, far beyond any single consumer GPU, so a multi-GPU or cluster setup is required. Popular tools include Ollama, LM Studio, and llama.cpp.
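
As a rough illustration only (no consumer machine has anywhere near enough memory), a GGUF build of the model could be loaded with llama-cpp-python, the Python bindings for llama.cpp. The file name below is hypothetical, and a real deployment would need a server with enough combined GPU memory:

```python
# Illustrative sketch using llama-cpp-python (Python bindings for llama.cpp).
# The GGUF file name is hypothetical; at Q4_K_M the weights alone are ~452 GB,
# so this only runs on hardware with enough combined GPU/CPU memory.
from llama_cpp import Llama

llm = Llama(
    model_path="glm-5-abliterated.Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=8192,        # keep context modest to limit KV-cache memory
    n_gpu_layers=-1,   # offload all layers to GPU memory if it fits
)

out = llm("Summarize the hardware needed to run a 750B-parameter model.", max_tokens=128)
print(out["choices"][0]["text"])
```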

What's the download size of GLM 5 Abliterated?

At Q4_K_M, the download is about 452.32 GB. The largest quantization listed, Q8_0, is 753.86 GB. The smallest option (IQ2_XXS) is 207.31 GB.

Which GPUs can run GLM 5 Abliterated?

No single consumer GPU has enough VRAM to run GLM 5 Abliterated at Q4_K_M (456.5 GB). Multi-GPU or professional hardware is required.

Which devices can run GLM 5 Abliterated?

Two systems have enough memory to run GLM 5 Abliterated at Q4_K_M (456.5 GB): the NVIDIA DGX A100 640GB and the NVIDIA DGX H100. Apple Silicon Macs share unified memory between the CPU and GPU, but even the highest-memory configurations fall short of the 594+ GB recommended for this quantization.