
Qwen3 235B A22B Thinking 2507 — Hardware Requirements & GPU Compatibility


Qwen3 235B A22B Thinking 2507 is the reasoning and chain-of-thought variant of Alibaba's largest Qwen3 mixture-of-experts model, updated in July 2025. With 235 billion total parameters and about 22 billion active per forward pass, it represents the pinnacle of Qwen3's reasoning capabilities. This model excels at complex multi-step problems, mathematical reasoning, code analysis, and tasks requiring deep logical thinking. It demands serious hardware to run locally, but for users with multi-GPU setups, it offers reasoning performance that rivals the best proprietary models while keeping all computation on your own machines.

53.5K downloads · 399 likes · Aug 2025 · 262K context

Specifications

Publisher
Alibaba
Family
Qwen
Parameters
235B
Architecture
Qwen3MoeForCausalLM
Context Length
262,144 tokens
Vocabulary Size
151,936
Release Date
2025-08-17
License
Apache 2.0

Get Started

How Much VRAM Does Qwen3 235B A22B Thinking 2507 Need?


Quantization    Bits     VRAM
BF16            16.00    470.5 GB

Which GPUs Can Run Qwen3 235B A22B Thinking 2507?

BF16 · 470.5 GB

Qwen3 235B A22B Thinking 2507 (BF16) requires 470.5 GB of VRAM to load the model weights. For comfortable inference with headroom for KV cache and system overhead, 612+ GB is recommended. Using the full 262K context window can add up to 25.0 GB, bringing total usage to 495.5 GB. No single GPU has enough memory — multi-GPU or cluster setups are needed.
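As a rough sketch, the minimum GPU count for a multi-GPU setup can be estimated by dividing the recommended total (612 GB from above) by each card's usable capacity. The helper below is illustrative, not a measured sizing tool; the 90% utilization discount and the example card capacities are assumptions:

```python
import math

def min_gpus(required_gb: float, per_gpu_gb: float, utilization: float = 0.9) -> int:
    """Estimate how many GPUs are needed to hold a model across devices.

    `utilization` discounts each card's capacity for framework overhead
    and memory fragmentation (an assumed figure, not a measurement).
    """
    usable_gb = per_gpu_gb * utilization
    return math.ceil(required_gb / usable_gb)

# 612 GB is the recommended total for BF16 with headroom (from the text above)
print(min_gpus(612, 80))   # 80 GB cards (e.g. H100 SXM) → 9
print(min_gpus(612, 141))  # 141 GB cards (e.g. H200) → 5
```

Real deployments also need to balance tensor-parallel shards evenly, so frameworks typically round up to a power-of-two GPU count.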

Which Devices Can Run Qwen3 235B A22B Thinking 2507?

BF16 · 470.5 GB

2 high-memory systems can run Qwen3 235B A22B Thinking 2507, including the NVIDIA DGX H100.



Frequently Asked Questions

How much VRAM does Qwen3 235B A22B Thinking 2507 need?

Qwen3 235B A22B Thinking 2507 requires 470.5 GB of VRAM at BF16. Full 262K context adds up to 25.0 GB (495.5 GB total).

VRAM = Weights + KV Cache + Overhead

Weights = 235B × 16 bits ÷ 8 bits/byte = 470 GB

KV Cache + Overhead = 0.5 GB at 2K context (including ~0.3 GB framework overhead)

KV Cache + Overhead = 25.5 GB at the full 262K context
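The formula above can be turned into a small estimator. This is a sketch: the per-token KV-cache figure is back-calculated from this page's numbers (~25 GB over 262,144 tokens, roughly 0.1 MB per token) and will vary with the serving stack and attention implementation:

```python
def estimate_vram_gb(params_b: float, bits: int, context_tokens: int,
                     kv_mb_per_token: float = 0.1,
                     framework_overhead_gb: float = 0.3) -> float:
    """VRAM = weights + KV cache + framework overhead (sketch).

    kv_mb_per_token is an assumption back-calculated from this page's
    figures; framework_overhead_gb is likewise the ~0.3 GB cited above.
    """
    weights_gb = params_b * bits / 8                 # 1B params at 16-bit = 2 GB
    kv_gb = context_tokens * kv_mb_per_token / 1024  # MB → GB
    return weights_gb + kv_gb + framework_overhead_gb

print(round(estimate_vram_gb(235, 16, 2048), 1))     # ~470.5 GB at 2K context
print(round(estimate_vram_gb(235, 16, 262_144), 1))  # ~496 GB at full context
```

With these assumed constants the estimate reproduces the page's 2K-context figure and lands within half a gigabyte of the full-context total.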

VRAM usage by quantization: BF16 is 470.5 GB at minimal context, rising to 495.5 GB at the full 262K context.


Can NVIDIA GeForce RTX 5090 run Qwen3 235B A22B Thinking 2507?

No — Qwen3 235B A22B Thinking 2507 requires at least 470.5 GB at BF16, which exceeds the NVIDIA GeForce RTX 5090's 32 GB of VRAM.

Can I run Qwen3 235B A22B Thinking 2507 on a Mac?

Qwen3 235B A22B Thinking 2507 requires at least 470.5 GB at BF16, which exceeds the unified memory of most consumer Macs. You would need a Mac Studio or Mac Pro with a high-memory configuration.

Can I run Qwen3 235B A22B Thinking 2507 locally?

Yes, though not on consumer hardware: at BF16 it needs 470.5 GB of VRAM, which calls for a multi-GPU server or a large unified-memory system. Popular tools include Ollama, LM Studio, and llama.cpp.

What's the download size of Qwen3 235B A22B Thinking 2507?

At BF16, the download is about 470 GB.