
Kimi K2 Instruct 0905 — Hardware Requirements & GPU Compatibility

29.5K downloads · 690 likes · 262K context

Specifications

Publisher: Moonshot AI
Family: Kimi K2
Parameters: 1026.5B
Architecture: DeepseekV3ForCausalLM
Context Length: 262,144 tokens
Vocabulary Size: 163,840
Release Date: 2026-01-30
License: Other


How Much VRAM Does Kimi K2 Instruct 0905 Need?

Select a quantization to see compatible GPUs below.

Quantization   Bits   VRAM
Q2_K           3.40   440.1 GB
Q3_K_S         3.50   453.0 GB
Q3_K_M         3.90   504.3 GB
Q4_0           4.00   517.1 GB
Q4_K_M         4.80   619.8 GB
Q5_K_M         5.70   735.2 GB
Q6_K           6.60   850.7 GB
Q8_0           8.00   1030.3 GB
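The per-quantization sizes in the table follow directly from the parameter count and the bits-per-weight column: weight bytes = parameters × bits ÷ 8, plus roughly 3.9 GB of KV cache and framework overhead at a 2K context. A minimal sketch of that arithmetic, using this page's figures (the 1026.47B parameter count is taken from the download-size FAQ below):

```python
PARAMS_B = 1026.47  # Kimi K2 Instruct 0905 parameter count, in billions

def weights_gb(bits_per_weight: float) -> float:
    """Approximate on-disk weight size in GB for a given quantization width."""
    return round(PARAMS_B * bits_per_weight / 8, 2)

print(weights_gb(4.8))  # Q4_K_M → 615.88
print(weights_gb(8.0))  # Q8_0   → 1026.47
```

Adding the ~3.9 GB KV-cache/overhead figure to these weight sizes reproduces the table's VRAM column (e.g. 615.88 + 3.9 ≈ 619.8 GB for Q4_K_M).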

Which GPUs Can Run Kimi K2 Instruct 0905?

Q4_K_M · 619.8 GB

Kimi K2 Instruct 0905 (Q4_K_M) requires 619.8 GB of VRAM to load the model weights. For comfortable inference with headroom for KV cache and system overhead, 806+ GB is recommended. Using the full 262K context window can add up to 454.9 GB, bringing total usage to 1074.7 GB. No single GPU has enough memory — multi-GPU or cluster setups are needed.
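One way to translate these totals into a cluster size is a simple ceiling division, assuming the weights and KV cache shard evenly across identical GPUs with no per-GPU duplication. Real tensor-parallel deployments carry extra per-GPU overhead, so treat this as an optimistic lower bound:

```python
import math

def gpus_needed(total_vram_gb: float, per_gpu_gb: float) -> int:
    """Minimum GPU count, assuming ideal even sharding (optimistic)."""
    return math.ceil(total_vram_gb / per_gpu_gb)

print(gpus_needed(619.8, 80))   # weights only, 80 GB cards → 8
print(gpus_needed(1074.7, 80))  # full 262K context → 14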

Which Devices Can Run Kimi K2 Instruct 0905?

Q4_K_M · 619.8 GB

2 devices with unified memory can run Kimi K2 Instruct 0905: the NVIDIA DGX A100 640GB and the NVIDIA DGX H100.

Decent: enough memory, may be tight


Frequently Asked Questions

How much VRAM does Kimi K2 Instruct 0905 need?

Kimi K2 Instruct 0905 requires 619.8 GB of VRAM at Q4_K_M, or 1030.3 GB at Q8_0. Full 262K context adds up to 454.9 GB (1074.7 GB total).

VRAM = Weights + KV Cache + Overhead

Weights = 1026.5B × 4.8 bits ÷ 8 = 615.9 GB

KV Cache + Overhead ≈ 3.9 GB at 2K context (including ~0.3 GB framework overhead)

KV Cache + Overhead ≈ 458.8 GB at full 262K context

Total at Q4_K_M: 619.8 GB (2K context) · 1074.7 GB (full 262K context)
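The formula above can be sketched in a few lines. The per-token KV-cache cost is not stated directly on this page, so the sketch derives it from the full-context figure (≈458.5 GB of KV cache over 262,144 tokens, after subtracting the ~0.3 GB framework overhead); that derivation is an assumption, not a published constant:

```python
WEIGHTS_GB = 1026.5 * 4.8 / 8       # Q4_K_M weights ≈ 615.9 GB
KV_GB_PER_TOKEN = 458.5 / 262_144   # derived from this page's full-context figure
FRAMEWORK_GB = 0.3                  # assumed fixed framework overhead

def vram_gb(context_tokens: int) -> float:
    """Estimated total VRAM: weights + KV cache + overhead."""
    return round(WEIGHTS_GB + KV_GB_PER_TOKEN * context_tokens + FRAMEWORK_GB, 1)

print(vram_gb(2_048))    # → 619.8
print(vram_gb(262_144))  # → 1074.7
```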


Can NVIDIA GeForce RTX 5090 run Kimi K2 Instruct 0905?

No — Kimi K2 Instruct 0905 requires at least 286.2 GB at IQ2_XXS, which exceeds the NVIDIA GeForce RTX 5090's 32 GB of VRAM.

What's the best quantization for Kimi K2 Instruct 0905?

For Kimi K2 Instruct 0905, Q4_K_M (619.8 GB) offers the best balance of quality and VRAM usage. Q5_K_S (709.6 GB) provides better quality if you have the VRAM. The smallest option is IQ2_XXS at 286.2 GB.

VRAM requirement by quantization

Quantization   VRAM
IQ2_XXS        286.2 GB
Q3_K_S         453.0 GB
Q4_1           581.3 GB
Q4_K_M ★       619.8 GB
Q5_K_S         709.6 GB
Q8_0           1030.3 GB

★ Recommended — best balance of quality and VRAM usage.
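Choosing a quantization amounts to picking the largest variant that fits your memory budget. A generic sketch of that selection, using the VRAM figures from the table above (the function and its name are illustrative, not part of any tool):

```python
# VRAM requirements (GB) from the table above, ordered smallest to largest.
QUANTS = [
    ("IQ2_XXS", 286.2),
    ("Q3_K_S", 453.0),
    ("Q4_1", 581.3),
    ("Q4_K_M", 619.8),
    ("Q5_K_S", 709.6),
    ("Q8_0", 1030.3),
]

def best_quant(budget_gb: float):
    """Largest (highest-quality) quantization whose VRAM need fits the budget."""
    fitting = [name for name, gb in QUANTS if gb <= budget_gb]
    return fitting[-1] if fitting else None

print(best_quant(640))  # e.g. a 640 GB system → 'Q4_K_M'
print(best_quant(300))  # → 'IQ2_XXS'
```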


Can I run Kimi K2 Instruct 0905 on a Mac?

Kimi K2 Instruct 0905 requires at least 286.2 GB at IQ2_XXS, which exceeds the unified memory of most consumer Macs. You would need a Mac Studio or Mac Pro with a high-memory configuration.

Can I run Kimi K2 Instruct 0905 locally?

Kimi K2 Instruct 0905 can run locally, but not on typical consumer hardware: at Q4_K_M quantization it needs 619.8 GB of VRAM, which calls for a multi-GPU server or a high-memory unified-memory system. Popular tools include Ollama, LM Studio, and llama.cpp.

What's the download size of Kimi K2 Instruct 0905?

At Q4_K_M, the download is about 615.88 GB. The Q8_0 version is 1026.47 GB, and the smallest option (IQ2_XXS) is 282.28 GB.

Which GPUs can run Kimi K2 Instruct 0905?

No single consumer GPU has enough VRAM to run Kimi K2 Instruct 0905 at Q4_K_M (619.8 GB). Multi-GPU or professional hardware is required.

Which devices can run Kimi K2 Instruct 0905?

2 devices with unified memory can run Kimi K2 Instruct 0905 at Q4_K_M (619.8 GB): the NVIDIA DGX A100 640GB and the NVIDIA DGX H100. Apple Silicon Macs also use unified memory shared between CPU and GPU, which suits local LLM inference in general, but current Mac configurations fall short of the 619.8 GB this quantization requires.