
Llama 2 13B Chat HF — Hardware Requirements & GPU Compatibility


Meta Llama 2 13B Chat is a 13-billion-parameter instruction-tuned model from Meta's Llama 2 family, fine-tuned for dialogue and chat applications using supervised fine-tuning and RLHF. It offers better reasoning and generation quality than the 7B variant while keeping hardware requirements manageable, and supports a 4K-token context window. At reduced precision it can run on consumer GPUs with 16 GB or more of VRAM. Released under the Llama 2 Community License.

146.9K downloads · 1.1K likes · Apr 2024

Specifications

Publisher: Meta
Family: Llama 2
Parameters: 13B
Release Date: 2024-04-17
License: Llama 2 Community


How Much VRAM Does Llama 2 13B Chat HF Need?


Quantization | Bits | VRAM
BF16 | 16 | 28.6 GB

Which GPUs Can Run Llama 2 13B Chat HF?

BF16 · 28.6 GB

Llama 2 13B Chat HF (BF16) requires 28.6 GB of VRAM to load the model weights. For comfortable inference with headroom for the KV cache and system overhead, 38+ GB is recommended. Only one consumer GPU can run it: the NVIDIA GeForce RTX 5090 (32 GB), rated "Decent" (enough VRAM, but may be tight).

All compatible consumer GPUs run near their VRAM limit, so you may also want to consider professional GPUs (e.g., NVIDIA A100, H100), which offer significantly more VRAM. For more headroom and better throughput, consider a multi-GPU configuration with tensor parallelism, supported by tools like vLLM, llama.cpp, or text-generation-inference; a minimal sketch follows.
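As a hedged illustration, here is one way such a setup might look with vLLM. The model ID is the model's Hugging Face repo; the GPU count and sampling settings are assumptions to adjust for your hardware:

```python
# Minimal sketch: serve Llama 2 13B Chat HF across two GPUs with vLLM
# tensor parallelism. tensor_parallel_size=2 is an assumption; set it to
# the number of GPUs you actually have.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-2-13b-chat-hf",  # gated repo: requires an accepted license on the Hub
    dtype="bfloat16",                        # matches the BF16 figures above (~28.6 GB total)
    tensor_parallel_size=2,                  # shard weights across 2 GPUs (~14.3 GB each)
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["[INST] What is tensor parallelism? [/INST]"], params)
print(outputs[0].outputs[0].text)
```

With two 24 GB GPUs, each shard holds roughly half of the 28.6 GB footprint, leaving headroom on each card for the KV cache.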


Which Devices Can Run Llama 2 13B Chat HF?

BF16 · 28.6 GB

15 devices with unified memory can run Llama 2 13B Chat HF, including the NVIDIA DGX H100, NVIDIA DGX A100 640GB, and Mac Studio M4 Max (36 GB).


Frequently Asked Questions

How much VRAM does Llama 2 13B Chat HF need?

Llama 2 13B Chat HF requires 28.6 GB of VRAM at BF16.

VRAM = Weights + KV Cache + Overhead

Weights = 13B × 16 bits ÷ 8 = 26 GB

KV Cache + Overhead ≈ 2.6 GB (KV cache at 2K context, plus ~0.3 GB framework overhead)
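The arithmetic above can be written as a small helper. This is a sketch: the 2.6 GB overhead constant is taken from this page's example, not a general rule.

```python
# Rough VRAM estimate following the formula above. The 2.6 GB
# kv_overhead_gb default is back-derived from this page's figures
# (KV cache at 2K context plus ~0.3 GB framework), not a measured value.
def estimate_vram_gb(params_b: float, bits: int, kv_overhead_gb: float = 2.6) -> float:
    weights_gb = params_b * bits / 8    # 13 * 16 / 8 = 26 GB
    return weights_gb + kv_overhead_gb  # 26 + 2.6 = 28.6 GB

print(f"{estimate_vram_gb(13, 16):.1f} GB")  # -> 28.6 GB
```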

VRAM usage by quantization: BF16 = 28.6 GB.

Learn more about VRAM estimation →

Can I run Llama 2 13B Chat HF on a Mac?

Llama 2 13B Chat HF requires at least 28.6 GB at BF16, which exceeds the unified memory of most consumer Macs. You would need a Mac Studio or Mac Pro with a high-memory configuration.

Can I run Llama 2 13B Chat HF locally?

Yes. Llama 2 13B Chat HF can run locally on capable consumer hardware: at BF16 precision it needs 28.6 GB of VRAM, and reduced-precision quantizations lower that further. Popular tools include Ollama, LM Studio, and llama.cpp; a loading sketch follows.
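As a minimal illustration (assuming access to the gated meta-llama repo on the Hugging Face Hub and roughly 29 GB of free GPU memory, per the figures above), loading the checkpoint in BF16 with Hugging Face Transformers might look like this:

```python
# Minimal sketch: load Llama 2 13B Chat HF in BF16 with Transformers.
# Assumes the gated meta-llama repo has been granted to your HF account.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-13b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~26 GB of weights, per the figures above
    device_map="auto",           # place layers on available GPU(s)
)

# Llama 2 Chat expects the [INST] ... [/INST] prompt format.
prompt = "[INST] Explain the KV cache in one sentence. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```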

How fast is Llama 2 13B Chat HF?

At BF16, Llama 2 13B Chat HF can reach ~102 tok/s on an AMD Instinct MI300X. Speed depends mainly on GPU memory bandwidth.

tok/s = (bandwidth GB/s ÷ model GB) × efficiency

Example: AMD Instinct MI300X: 5300 ÷ 28.6 × 0.55 ≈ 102 tok/s
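Expressed as a small helper (a sketch: the 0.55 efficiency factor and the 5300 GB/s MI300X bandwidth are taken from the example above, not benchmarks):

```python
# Rough decode-throughput estimate following the formula above.
# efficiency=0.55 is the factor used in this page's example, not a benchmark.
def estimate_tok_per_s(bandwidth_gb_s: float, model_gb: float,
                       efficiency: float = 0.55) -> float:
    return bandwidth_gb_s / model_gb * efficiency

# AMD Instinct MI300X: ~5300 GB/s memory bandwidth, 28.6 GB model at BF16
print(round(estimate_tok_per_s(5300, 28.6)))  # -> 102
```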

Estimated speed at BF16 (28.6 GB): ~102 tok/s on the fastest listed GPU (AMD Instinct MI300X), with the other listed GPUs at ~76 and ~63 tok/s.

Real-world results are typically within ±20%; speed depends on batch size, quantization kernel, and software stack.

Learn more about tok/s estimation →

What's the download size of Llama 2 13B Chat HF?

At BF16, the download is about 26 GB (the model weights alone).