
Falcon 180B — Hardware Requirements & GPU Compatibility


Specifications

Publisher: TII UAE
Family: Falcon
Parameters: 180B
Release Date: 2023-09-06
License: unknown


How Much VRAM Does Falcon 180B Need?


Quantization | Bits | VRAM
BF16 | 16 | 396 GB

Which GPUs Can Run Falcon 180B?

BF16 · 396 GB

Falcon 180B (BF16) requires 396 GB of VRAM to load the model weights. For comfortable inference with headroom for KV cache and system overhead, 515+ GB is recommended. No single GPU has enough memory — multi-GPU or cluster setups are needed.
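
A minimal sketch of driving such a multi-GPU setup with Hugging Face transformers and accelerate, assuming a node with enough total GPU memory (for example eight 80 GB GPUs) and access to the gated tiiuae/falcon-180B checkpoint:

```python
# Sketch: shard Falcon 180B's BF16 weights across all visible GPUs.
# Assumes transformers + accelerate are installed and the checkpoint
# is accessible; download time and exact package versions are not shown.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-180B"  # gated repo; license acceptance required

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16 weights, ~360 GB on their own
    device_map="auto",           # let accelerate place layers across GPUs
)

inputs = tokenizer("Falcon 180B is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```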

Which Devices Can Run Falcon 180B?

BF16 · 396 GB

Two multi-GPU systems can run Falcon 180B at BF16: the NVIDIA DGX H100 and the NVIDIA DGX A100 640GB, each providing 640 GB of total GPU memory across eight 80 GB GPUs.


Frequently Asked Questions

How much VRAM does Falcon 180B need?

Falcon 180B requires 396 GB of VRAM at BF16.

VRAM = Weights + KV Cache + Overhead

Weights = 180B × 16 bits ÷ 8 = 360 GB

KV Cache + Overhead ≈ 36 GB (at 2K context, plus ~0.3 GB framework overhead)

Total (BF16) = 360 GB + 36 GB = 396 GB
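
The same arithmetic as a small Python helper; the 36 GB figure for KV cache plus overhead is taken from the estimate above rather than measured:

```python
# Minimal sketch of the VRAM estimate above.
def falcon_180b_vram_gb(bits: int = 16,
                        params_billion: float = 180.0,
                        kv_and_overhead_gb: float = 36.0) -> float:
    """Weights (GB) = parameters (billions) × bits ÷ 8, plus KV cache and overhead."""
    weights_gb = params_billion * bits / 8
    return weights_gb + kv_and_overhead_gb

print(falcon_180b_vram_gb(16))  # 396.0 — matches the BF16 figure above
```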


Can NVIDIA GeForce RTX 5090 run Falcon 180B?

No — Falcon 180B requires at least 396 GB at BF16, which exceeds the NVIDIA GeForce RTX 5090's 32 GB of VRAM.

Can I run Falcon 180B on a Mac?

Falcon 180B requires at least 396 GB at BF16, which exceeds the unified memory of most consumer Macs. You would need a Mac Studio or Mac Pro with a high-memory configuration.

Can I run Falcon 180B locally?

Only with data-center-class hardware. At BF16, Falcon 180B needs 396 GB of VRAM, which no single consumer GPU can provide, so a multi-GPU server or cluster is required. Tools such as llama.cpp, Ollama, and LM Studio support the model, typically via quantized builds that shrink the memory footprint; a loading sketch follows below.
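
For example, a quantized GGUF build can be loaded through llama-cpp-python; the file name below is a placeholder for whichever quantization you have downloaded, and lower-bit quantizations reduce the 360 GB BF16 footprint substantially:

```python
# Hedged sketch: run a quantized Falcon 180B GGUF via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="falcon-180b.Q4_K_M.gguf",  # placeholder path and quantization
    n_ctx=2048,       # matches the 2K context used in the VRAM estimate
    n_gpu_layers=-1,  # offload as many layers to GPU memory as will fit
)

print(llm("Falcon 180B is", max_tokens=32)["choices"][0]["text"])
```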

What's the download size of Falcon 180B?

At BF16, the download is about 360 GB.

Which GPUs can run Falcon 180B?

No single consumer GPU has enough VRAM to run Falcon 180B at BF16 (396 GB). Multi-GPU or professional hardware is required.
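
A back-of-the-envelope check of what that implies in 80 GB data-center GPUs, using the 515 GB comfortable figure quoted earlier:

```python
# Rough sketch: GPUs needed to hold Falcon 180B at BF16 with headroom.
import math

required_gb = 515   # weights + KV cache + headroom at BF16 (from above)
per_gpu_gb = 80     # e.g. A100 80GB or H100 80GB

print(math.ceil(required_gb / per_gpu_gb))  # 7 — in practice an 8-GPU node
```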

Which devices can run Falcon 180B?

Two multi-GPU systems can run Falcon 180B at BF16 (396 GB): the NVIDIA DGX A100 640GB and the NVIDIA DGX H100. The BF16 weights exceed the unified memory available on consumer Apple Silicon Macs, so a Mac is not a practical option at this precision.