Gemma 2 27B IT — Hardware Requirements & GPU Compatibility

Google Gemma 2 27B IT is a 27.2-billion-parameter instruction-tuned model from Google's Gemma 2 generation. It is a text-only chat model optimized for conversational use, reasoning, and instruction following, and it was one of the strongest openly available models in its size class at release. At the recommended Q4_K_M quantization it fits on a 24 GB GPU with headroom, while more aggressive quantizations run in as little as 9 GB of VRAM. The model is widely supported by popular inference engines and remains a strong choice for users seeking high-quality local chat without needing 70B-class hardware. Released under the Gemma license.

401.5K downloads · 560 likes · 8K context

Specifications

Publisher: Google
Family: Gemma 2
Parameters: 27.2B
Context Length: 8,192 tokens
License: Gemma Terms

How Much VRAM Does Gemma 2 27B IT Need?

Select a quantization to see compatible GPUs below.

Quantization | Bits | VRAM
IQ2_XS | 2.40 | 9.0 GB
IQ2_S | 2.50 | 9.4 GB
IQ2_M | 2.70 | 10.1 GB
IQ3_XXS | 3.10 | 11.6 GB
IQ3_XS | 3.30 | 12.3 GB
Q2_K | 3.40 | 12.7 GB
IQ3_S | 3.40 | 12.7 GB
Q3_K_S | 3.50 | 13.1 GB
IQ3_M | 3.60 | 13.5 GB
Q3_K_M | 3.90 | 14.6 GB
Q3_K_L | 4.10 | 15.3 GB
IQ4_XS | 4.30 | 16.1 GB
Q4_K_S | 4.50 | 16.9 GB
Q4_K_M | 4.80 | 18.0 GB
Q4_K_L | 4.90 | 18.3 GB
Q5_K_S | 5.50 | 20.6 GB
Q5_K_M | 5.70 | 21.3 GB
Q5_K_L | 5.80 | 21.7 GB
Q6_K | 6.60 | 24.7 GB
Q8_0 | 8.00 | 29.9 GB
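These figures are consistent with a simple rule of thumb: weights = parameters × bits ÷ 8, plus roughly 10% overhead. Below is a minimal Python sketch that reproduces the table; note the 1.10 multiplier is inferred by fitting the numbers above, not an official constant.

```python
# Reproduce the VRAM column: weights (params x bits / 8) plus ~10% overhead.
# The 1.10 multiplier is fitted to the table above, not an official figure.

PARAMS_B = 27.2   # parameters, in billions
OVERHEAD = 1.10   # assumed ~10% allocator/runtime overhead

quants = {"IQ2_XS": 2.40, "Q2_K": 3.40, "Q4_K_M": 4.80, "Q6_K": 6.60, "Q8_0": 8.00}

for name, bits in quants.items():
    weights_gb = PARAMS_B * bits / 8
    print(f"{name:7s} {weights_gb * OVERHEAD:5.1f} GB")
# IQ2_XS 9.0, Q2_K 12.7, Q4_K_M 18.0, Q6_K 24.7, Q8_0 29.9, matching the table
```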

Which GPUs Can Run Gemma 2 27B IT?

Q4_K_M · 18.0 GB

Gemma 2 27B IT (Q4_K_M) requires 18.0 GB of VRAM to load the model weights. For comfortable inference with headroom for the KV cache and system overhead, 24 GB or more is recommended. Six GPUs can run it at this quantization, including the NVIDIA GeForce RTX 5090 and NVIDIA GeForce RTX 3090 Ti.

Which Devices Can Run Gemma 2 27B IT?

Q4_K_M · 18.0 GB

21 devices with unified memory can run Gemma 2 27B IT at this quantization, including the NVIDIA DGX H100, NVIDIA DGX A100 640GB, and Mac Mini M4 Pro (24 GB).

Frequently Asked Questions

How much VRAM does Gemma 2 27B IT need?

Gemma 2 27B IT requires 18.0 GB of VRAM at Q4_K_M, or 29.9 GB at Q8_0.

VRAM = Weights + KV Cache + Overhead

Weights = 27.2B × 4.8 bits ÷ 8 = 16.3 GB

KV Cache + Overhead ≈ 1.7 GB (at 2K context, including ~0.3 GB framework overhead)

Total ≈ 16.3 + 1.7 = 18.0 GB at Q4_K_M
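The same arithmetic as a small Python sketch; the per-token KV-cache cost is back-solved from this page's own numbers ((1.7 − 0.3) GB over 2,048 tokens), so treat it as an approximation rather than a published figure:

```python
# VRAM = weights + KV cache + overhead, per the formula above.
# KV-cache cost per token is back-solved from this page's numbers
# ((1.7 GB - 0.3 GB framework) / 2048 tokens), so treat it as approximate.

PARAMS_B = 27.2                          # parameters, in billions
FRAMEWORK_GB = 0.3                       # fixed framework overhead
KV_GB_PER_TOKEN = (1.7 - 0.3) / 2048     # ~0.0007 GB per cached token

def vram_gb(bits_per_weight: float, context_tokens: int = 2048) -> float:
    weights = PARAMS_B * bits_per_weight / 8    # 27.2 x 4.8 / 8 = 16.3 GB
    kv_cache = KV_GB_PER_TOKEN * context_tokens
    return weights + kv_cache + FRAMEWORK_GB

print(f"Q4_K_M @ 2K context: {vram_gb(4.8):.1f} GB")        # ~18.0 GB
print(f"Q4_K_M @ 8K context: {vram_gb(4.8, 8192):.1f} GB")  # full context costs more
```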

Learn more about VRAM estimation →

Can NVIDIA GeForce RTX 4090 run Gemma 2 27B IT?

Yes, at Q5_K_L (21.7 GB) or lower. Higher quantizations like Q6_K (24.7 GB) exceed the NVIDIA GeForce RTX 4090's 24 GB.
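As a sketch, you can pick the largest quantization whose weights fit a given card; sizes come from the table above, and this checks weights only, so leave headroom for the KV cache:

```python
# Largest quantization whose weights fit in a given amount of VRAM.
# Sizes come from the table above; this ignores KV cache and overhead.

QUANT_GB = {
    "Q4_K_M": 18.0, "Q4_K_L": 18.3, "Q5_K_S": 20.6,
    "Q5_K_M": 21.3, "Q5_K_L": 21.7, "Q6_K": 24.7, "Q8_0": 29.9,
}

def best_fit(vram_gb: float) -> str | None:
    fits = {q: gb for q, gb in QUANT_GB.items() if gb <= vram_gb}
    return max(fits, key=fits.get) if fits else None

print(best_fit(24.0))  # RTX 4090 (24 GB) -> Q5_K_L, as stated above
print(best_fit(32.0))  # RTX 5090 (32 GB) -> Q8_0
```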

What's the best quantization for Gemma 2 27B IT?

For Gemma 2 27B IT, Q4_K_M (18.0 GB) offers the best balance of quality and VRAM usage. Q4_K_L (18.3 GB) provides better quality if you have the VRAM. The smallest option is IQ2_XS at 9.0 GB.

VRAM requirement by quantization

IQ2_XS | 9.0 GB
Q2_K | 12.7 GB
Q3_K_L | 15.3 GB
Q4_K_M ★ | 18.0 GB
Q4_K_L | 18.3 GB
Q8_0 | 29.9 GB

★ Recommended: best balance of quality and VRAM usage.

Learn more about quantization →

Can I run Gemma 2 27B IT on a Mac?

Yes. At Q4_K_M, Gemma 2 27B IT needs 18.0 GB, which fits in the unified memory of Macs with 24 GB or more, such as the Mac Mini M4 Pro (24 GB). Smaller quantizations go as low as 9.0 GB (IQ2_XS), while higher-quality options like Q8_0 (29.9 GB) call for a Mac Studio or Mac Pro with a high-memory configuration.

Can I run Gemma 2 27B IT locally?

Yes — Gemma 2 27B IT can run locally on consumer hardware. At Q4_K_M quantization it needs 18.0 GB of VRAM. Popular tools include Ollama, LM Studio, and llama.cpp.
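For example, here is a minimal llama-cpp-python sketch; the GGUF filename is a placeholder for whichever Q4_K_M file you downloaded:

```python
# Minimal local chat with a Q4_K_M GGUF of Gemma 2 27B IT via llama-cpp-python.
# Install with GPU support, e.g.: pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-2-27b-it-Q4_K_M.gguf",  # placeholder: path to your GGUF
    n_gpu_layers=-1,  # offload all layers to the GPU (~18 GB plus KV cache)
    n_ctx=8192,       # Gemma 2's maximum context length
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the Gemma 2 architecture."}]
)
print(out["choices"][0]["message"]["content"])
```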

How fast is Gemma 2 27B IT?

At Q4_K_M, Gemma 2 27B IT can reach ~162 tok/s on an AMD Instinct MI300X, and ~37 tok/s on an NVIDIA GeForce RTX 4090. Speed depends mainly on GPU memory bandwidth.

tok/s = (bandwidth GB/s ÷ model GB) × efficiency

Example (AMD Instinct MI300X): 5300 ÷ 18.0 × 0.55 ≈ 162 tok/s
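The same estimate in code; the bandwidth figure and the 0.55 efficiency factor are the page's assumptions:

```python
# Decode-speed estimate: each generated token streams all weights from
# VRAM once, so bandwidth / model size bounds throughput; 0.55 is an
# assumed kernel efficiency, per the formula above.

MODEL_GB = 18.0     # Q4_K_M weights
EFFICIENCY = 0.55   # assumed software/kernel efficiency

def tok_per_s(bandwidth_gb_s: float) -> float:
    return bandwidth_gb_s / MODEL_GB * EFFICIENCY

print(f"AMD Instinct MI300X (5300 GB/s): ~{tok_per_s(5300):.0f} tok/s")  # ~162
```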

Estimated speeds at Q4_K_M (18.0 GB): AMD Instinct MI300X ~162 tok/s; NVIDIA GeForce RTX 4090 ~37 tok/s.

Real-world results are typically within ±20% of these estimates; speed also depends on batch size, quantization kernel, and software stack.

Learn more about tok/s estimation →

What's the download size of Gemma 2 27B IT?

At Q4_K_M, the download is about 16.34 GB (27.23B parameters × 4.8 bits ÷ 8). The 8-bit Q8_0 version is 27.23 GB, and the smallest option (IQ2_XS) is 8.17 GB.