Mamba 130M HF — Hardware Requirements & GPU Compatibility
Mamba 130M HF is a state-space model developed by State Spaces that offers a fundamentally different architecture from the Transformer-based models that dominate the LLM landscape. Using selective state-space layers instead of attention, Mamba achieves linear-time inference scaling with sequence length, making it particularly efficient for processing long inputs. At 130 million parameters this is primarily a research and demonstration model, but it showcases the potential of state-space architectures for local deployment. Users interested in alternatives to Transformer-based language models will find Mamba 130M a lightweight, accessible entry point for experimentation.
Specifications
- Publisher: State Spaces
- Parameters: 129M
- Architecture: MambaForCausalLM
- Vocabulary Size: 50,280
- Release Date: 2024-03-06
Get Started
HuggingFace
How Much VRAM Does Mamba 130M HF Need?
| Quantization | Bits | VRAM | + Context | File Size | Quality |
|---|---|---|---|---|---|
| BF16 | 16.00 | 0.3 GB | — | 0.26 GB | Brain floating point 16 — preferred for training |
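The table's file-size figure follows directly from the parameter count and bit width in the spec above; a small sketch of that arithmetic:

```python
def weight_size_gb(params: int, bits: int) -> float:
    """Memory taken by the raw weights: params x bits / 8, in gigabytes."""
    return params * bits / 8 / 1e9

# 129M parameters at BF16 (16 bits per weight)
size = weight_size_gb(129_000_000, 16)
print(f"{size:.2f} GB")  # -> 0.26 GB on disk; ~0.3 GB in VRAM with runtime overhead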
Which GPUs Can Run Mamba 130M HF?
BF16 · 0.3 GB

Mamba 130M HF (BF16) requires 0.3 GB of VRAM to load the model weights. For comfortable inference with headroom for the recurrent state and system overhead, 1+ GB is recommended. 35 GPUs can run it, including the NVIDIA GeForce RTX 5090 and NVIDIA GeForce RTX 3090 Ti.

Runs great — Plenty of headroom

Which Devices Can Run Mamba 130M HF?
BF16 · 0.3 GB

33 devices with unified memory can run Mamba 130M HF, including the NVIDIA DGX H100 and NVIDIA DGX A100 640GB.

Runs great — Plenty of headroom

Related Models
Derivatives (1)
Frequently Asked Questions
- How much VRAM does Mamba 130M HF need?
Mamba 130M HF requires 0.3 GB of VRAM at BF16.
VRAM = Weights + Cache + Overhead (Mamba keeps a small fixed-size recurrent state rather than a growing KV cache)
Weights = 129M × 16 bits ÷ 8 = ~0.26 GB, or ~0.3 GB in practice with runtime overhead
VRAM usage by quantization
BF16: 0.3 GB
- Can I run Mamba 130M HF on a Mac?
Yes — at just 0.3 GB at BF16, Mamba 130M HF fits comfortably in the unified memory of any Apple Silicon Mac, including base-configuration MacBook Air models.
- Can I run Mamba 130M HF locally?
Yes — Mamba 130M HF runs locally on virtually any consumer hardware. At BF16 precision it needs only 0.3 GB of VRAM. The simplest route is the Hugging Face transformers library; llama.cpp-based tools such as Ollama and LM Studio also offer Mamba-architecture support.
- How fast is Mamba 130M HF?
At BF16, Mamba 130M HF can reach ~10411 tok/s on AMD Instinct MI300X. On NVIDIA GeForce RTX 4090: ~2340 tok/s. Speed depends mainly on GPU memory bandwidth. Real-world results typically within ±20%.
tok/s = (bandwidth GB/s ÷ model GB) × efficiency
Example: AMD Instinct MI300X → 5300 ÷ 0.28 × 0.55 ≈ 10,411 tok/s (using the unrounded model size of ~0.28 GB)
Estimated speed at BF16 (0.3 GB)
| GPU | Estimated speed |
|---|---|
| AMD Instinct MI300X | ~10,411 tok/s |
| NVIDIA GeForce RTX 4090 | ~2,340 tok/s |
| NVIDIA H100 SXM | ~7,781 tok/s |
| AMD Instinct MI250X | ~6,437 tok/s |

Real-world results are typically within ±20%. Speed depends on batch size, quantization kernel, and software stack.
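The estimates above follow the bandwidth-bound formula in this answer; a minimal sketch, where 5300 GB/s is the MI300X's spec-sheet memory bandwidth, 0.55 is the assumed efficiency factor, and ~0.28 GB is the unrounded model size implied by the published numbers:

```python
def est_tok_per_s(bandwidth_gb_s: float, model_gb: float, efficiency: float = 0.55) -> float:
    """Bandwidth-bound decode estimate: (bandwidth / model size) x efficiency."""
    return bandwidth_gb_s / model_gb * efficiency

# AMD Instinct MI300X: 5300 GB/s memory bandwidth, ~0.28 GB model size (assumption)
print(round(est_tok_per_s(5300, 0.28)))  # -> 10411 tok/s
```

The per-GPU efficiency factor is an assumption of the estimator, not a measured constant, which is why real-world results vary by ±20% or more.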
- What's the download size of Mamba 130M HF?
At BF16, the download is about 0.26 GB.