
Salamandra 7B Instruct — Hardware Requirements & GPU Compatibility


Salamandra 7B Instruct is a 7.8-billion-parameter multilingual model developed by the Barcelona Supercomputing Center (BSC-LT) as part of a European initiative to build high-quality open language models. It has particular strength in Iberian languages including Spanish, Catalan, Portuguese, and Basque, while also supporting English and other major European languages. This model is an excellent choice for users who need strong performance in Spanish or other Iberian languages that are often underserved by mainstream LLMs. Running it locally ensures data privacy for sensitive multilingual workflows, and at 7B parameters it fits comfortably on a single consumer GPU with 8 GB or more of VRAM.

70.3K downloads · 76 likes · Oct 2025 · 8K context

Specifications

Publisher: BSC-LT
Parameters: 7.8B
Architecture: LlamaForCausalLM
Context Length: 8,192 tokens
Vocabulary Size: 256,000
Release Date: 2025-10-22
License: Apache 2.0


How Much VRAM Does Salamandra 7B Instruct Need?


Quantization   Bits    VRAM
BF16           16.00   16.1 GB

Which GPUs Can Run Salamandra 7B Instruct?

BF16 · 16.1 GB

Salamandra 7B Instruct (BF16) requires 16.1 GB of VRAM to load: 15.5 GB for the weights plus roughly 0.6 GB of baseline KV cache and framework overhead. For comfortable inference with headroom for a larger KV cache and system overhead, 21+ GB is recommended. Using the full 8K context window adds up to 0.8 GB, bringing total usage to 16.9 GB. Six GPUs can run it, including the NVIDIA GeForce RTX 5090 and the NVIDIA GeForce RTX 3090 Ti.
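As a quick sanity check, here is a minimal Python sketch that applies the figures above (15.5 GB of weights, 0.6 to 1.4 GB of KV cache plus overhead) to decide whether a given GPU has enough VRAM. The GPU names and VRAM sizes in the example list are illustrative, not the page's full compatibility list.

```python
# Rough fit check for Salamandra 7B Instruct at BF16, using the figures on this page.
WEIGHTS_GB = 15.5        # BF16 weights
KV_OVERHEAD_2K_GB = 0.6  # KV cache at 2K context + ~0.3 GB framework overhead
KV_OVERHEAD_8K_GB = 1.4  # KV cache at the full 8K context + overhead

def fits(gpu_vram_gb: float, full_context: bool = True) -> bool:
    """Return True if the GPU can hold the weights plus KV cache and overhead."""
    need = WEIGHTS_GB + (KV_OVERHEAD_8K_GB if full_context else KV_OVERHEAD_2K_GB)
    return gpu_vram_gb >= need

# Illustrative GPUs only; VRAM sizes are the cards' standard configurations.
for name, vram in [("RTX 5090", 32), ("RTX 3090 Ti", 24), ("RTX 4080", 16)]:
    print(f"{name} ({vram} GB): {'fits' if fits(vram) else 'does not fit'} at full 8K context")
```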

Which Devices Can Run Salamandra 7B Instruct?

BF16 · 16.1 GB

21 devices with unified memory can run Salamandra 7B Instruct, including the NVIDIA DGX H100, NVIDIA DGX A100 640GB, and Mac Mini M4 Pro (24 GB).


Frequently Asked Questions

How much VRAM does Salamandra 7B Instruct need?

Salamandra 7B Instruct requires 16.1 GB of VRAM at BF16. Full 8K context adds up to 0.8 GB (16.9 GB total).

VRAM = Weights + KV Cache + Overhead

Weights = 7.8B params × 16 bits ÷ 8 = 15.5 GB

KV Cache + Overhead ≈ 0.6 GB (2K context + ~0.3 GB framework overhead)

KV Cache + Overhead ≈ 1.4 GB (full 8K context)
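
As a worked check, a few lines of Python reproduce the estimate above; the 0.6 GB and 1.4 GB KV-cache-plus-overhead figures are taken directly from this page, and the small rounding difference on the weights comes from using 7.8B rather than the exact parameter count.

```python
# Worked VRAM estimate for Salamandra 7B Instruct at BF16, mirroring the formula above.
params = 7.8e9                       # ~7.8B parameters
weights_gb = params * 16 / 8 / 1e9   # 16 bits = 2 bytes per parameter -> ~15.6 GB
                                     # (the page reports 15.5 GB from the exact count)

kv_overhead_2k_gb = 0.6              # KV cache at 2K context + ~0.3 GB framework overhead
kv_overhead_8k_gb = 1.4              # KV cache at the full 8K context + overhead

print(f"weights         ~ {weights_gb:.1f} GB")
print(f"total @ 2K      ~ {weights_gb + kv_overhead_2k_gb:.1f} GB")  # page: 16.1 GB
print(f"total @ full 8K ~ {weights_gb + kv_overhead_8k_gb:.1f} GB")  # page: 16.9 GB
```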

VRAM usage at BF16: 16.1 GB (2K context) to 16.9 GB (full 8K context)

Learn more about VRAM estimation →

Can I run Salamandra 7B Instruct on a Mac?

Salamandra 7B Instruct requires at least 16.1 GB at BF16, which rules out Macs with 16 GB or less of unified memory. You would need a configuration with 24 GB or more, such as a Mac Mini M4 Pro (24 GB), a Mac Studio, or a Mac Pro.

Can I run Salamandra 7B Instruct locally?

Yes: Salamandra 7B Instruct can run locally on consumer hardware. At BF16 precision it needs 16.1 GB of VRAM. Popular tools include Ollama, LM Studio, and llama.cpp.
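
Because the checkpoint is a standard LlamaForCausalLM, it can also be loaded directly with Hugging Face transformers. Below is a minimal sketch, assuming the Hub repo id is BSC-LT/salamandra-7b-instruct (verify the exact id before running) and a GPU with roughly 17 GB or more of free VRAM.

```python
# Minimal local-inference sketch with Hugging Face transformers at BF16.
# Assumption: the checkpoint is published on the Hub as "BSC-LT/salamandra-7b-instruct".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BSC-LT/salamandra-7b-instruct"  # assumed repo id; check the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16 weights, ~15.5 GB on the GPU
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain in one sentence what Salamandra is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

GGUF conversions of the same model can instead be served through Ollama, LM Studio, or llama.cpp, which handle quantized variants with lower VRAM needs.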

How fast is Salamandra 7B Instruct?

At BF16, Salamandra 7B Instruct can reach ~181 tok/s on an AMD Instinct MI300X and ~41 tok/s on an NVIDIA GeForce RTX 4090. Speed depends mainly on GPU memory bandwidth.

tok/s = (bandwidth GB/s ÷ model GB) × efficiency

Example (AMD Instinct MI300X): 5300 GB/s ÷ 16.1 GB × 0.55 ≈ 181 tok/s
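
A minimal sketch applying that estimate in Python; the 5300 GB/s bandwidth and the 0.55 efficiency factor come from the MI300X example above, and the efficiency achieved in practice varies by GPU, kernel, and software stack.

```python
# Throughput estimate from the formula above: tok/s ~ bandwidth / model size * efficiency.
MODEL_GB = 16.1  # BF16 footprint of Salamandra 7B Instruct

def est_tok_s(bandwidth_gb_s: float, efficiency: float = 0.55) -> float:
    """Rough decode throughput for a memory-bandwidth-bound model."""
    return bandwidth_gb_s / MODEL_GB * efficiency

print(f"AMD Instinct MI300X (~5300 GB/s): ~{est_tok_s(5300):.0f} tok/s")  # ~181 tok/s
```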

Estimated speed at BF16 (16.1 GB) ranges from ~41 tok/s on the NVIDIA GeForce RTX 4090 up to ~181 tok/s on the AMD Instinct MI300X, with other GPUs falling in between.

Real-world results typically within ±20%. Speed depends on batch size, quantization kernel, and software stack.

Learn more about tok/s estimation →

What's the download size of Salamandra 7B Instruct?

At BF16, the download is about 15.54 GB.