GPUs with 20 GB or More VRAM
Browse 25 GPUs with 20 GB or more VRAM compatible with running LLM models locally. Compare VRAM, memory bandwidth, and AI performance.
Which GPU Do You Need for AI?
The amount of VRAM is the most important specification for running LLMs locally. Most 7B-parameter models require 4–8 GB of VRAM at common quantization levels, while 70B models need 24–48 GB. Memory bandwidth determines how fast the model generates tokens: generating each token requires reading all of the model's weights from memory, so higher bandwidth means faster responses.
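As a rough back-of-the-envelope sketch of the two rules above (the formulas, overhead factor, and example bandwidth figure are illustrative assumptions, not figures from this page):

```python
# Rough sizing sketch. Assumptions:
# - weight memory ≈ parameters × bits-per-weight / 8, plus ~20% overhead
#   for KV cache and activations (the 1.2 factor is a ballpark, not exact)
# - single-stream decode speed is memory-bandwidth-bound: each generated
#   token reads every weight once, so tok/s ceiling ≈ bandwidth / weight bytes

def vram_needed_gb(params_b: float, bits: int, overhead: float = 1.2) -> float:
    """Approximate VRAM (GB) to run a model of `params_b` billion parameters
    quantized to `bits` bits per weight."""
    return params_b * (bits / 8) * overhead

def decode_ceiling_tok_s(params_b: float, bits: int, bandwidth_gb_s: float) -> float:
    """Bandwidth-bound upper limit on tokens per second for one request."""
    weight_gb = params_b * (bits / 8)
    return bandwidth_gb_s / weight_gb

# Example: a 7B model at 4-bit quantization on a GPU with ~1000 GB/s bandwidth
print(round(vram_needed_gb(7, 4), 1))           # ~4.2 GB, fits the 4–8 GB range
print(round(decode_ceiling_tok_s(7, 4, 1000)))  # ~286 tok/s theoretical ceiling
```

Real-world throughput lands well below the ceiling, but the ratio explains why two GPUs with the same VRAM can generate tokens at very different speeds.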
GPU List
AMD Instinct MI210
AMD · CDNA 2
AMD Instinct MI250X
AMD · CDNA 2
AMD Instinct MI300X
AMD · CDNA 3
AMD Radeon PRO W7800
AMD · RDNA 3
AMD Radeon PRO W7900
AMD · RDNA 3
AMD Radeon RX 7900 XT
AMD · RDNA 3
AMD Radeon RX 7900 XTX
AMD · RDNA 3
NVIDIA A100 40GB PCIe
NVIDIA · Ampere
NVIDIA A100 80GB SXM
NVIDIA · Ampere
NVIDIA A40
NVIDIA · Ampere
NVIDIA GeForce RTX 3090
NVIDIA · Ampere
NVIDIA GeForce RTX 3090 Ti
NVIDIA · Ampere
NVIDIA GeForce RTX 4090
NVIDIA · Ada Lovelace
NVIDIA GeForce RTX 5090
NVIDIA · Blackwell
NVIDIA H100 PCIe
NVIDIA · Hopper
NVIDIA H100 SXM
NVIDIA · Hopper
NVIDIA L4
NVIDIA · Ada Lovelace
NVIDIA L40
NVIDIA · Ada Lovelace
NVIDIA L40S
NVIDIA · Ada Lovelace
NVIDIA RTX 4000 Ada Generation
NVIDIA · Ada Lovelace
NVIDIA RTX 5000 Ada Generation
NVIDIA · Ada Lovelace
NVIDIA RTX 6000 Ada Generation
NVIDIA · Ada Lovelace
NVIDIA RTX A5000
NVIDIA · Ampere
NVIDIA RTX A6000
NVIDIA · Ampere
NVIDIA V100 SXM2 32GB
NVIDIA · Volta