GPUs with 10–14 GB VRAM
Browse 11 GPUs with 10–14 GB VRAM compatible with running LLM models locally. Compare VRAM, memory bandwidth, and AI performance.
Which GPU Do You Need for AI?
The amount of VRAM is the most important specification for running LLMs locally. Most 7B parameter models require 4–8 GB of VRAM at common quantization levels, while 70B models need 24–48 GB. Memory bandwidth determines how fast the model generates tokens — faster bandwidth means faster responses.
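The VRAM figures above come from simple arithmetic: weight memory is roughly parameter count times bits per weight divided by 8, plus runtime overhead for the KV cache and buffers. A minimal sketch of that estimate, assuming a hypothetical ~20% overhead factor (real usage varies with context length and runtime):

```python
# Rough VRAM estimate for running an LLM locally. This is a sketch:
# actual usage also depends on KV cache size, context length, and the
# inference runtime, so treat the overhead factor as an assumption.

def estimate_vram_gb(params_billions: float, bits_per_weight: float,
                     overhead_factor: float = 1.2) -> float:
    """Weights in GB = params * bits / 8, plus ~20% overhead (assumed)."""
    weight_gb = params_billions * bits_per_weight / 8
    return weight_gb * overhead_factor

# A 7B model at 4-bit quantization: 7 * 4 / 8 * 1.2 = 4.2 GB
print(round(estimate_vram_gb(7, 4), 1))
# A 70B model at 4-bit quantization: 70 * 4 / 8 * 1.2 = 42 GB
print(round(estimate_vram_gb(70, 4), 1))
```

The same formula explains the ranges in the paragraph: 7B models land in the 4–8 GB window depending on quantization, while 70B models need 24–48 GB and exceed every card on this page.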
GPU List
AMD Radeon RX 6700 XT
AMD · RDNA 2
384.0 GB/s · 2,560 SP · 230W TDP · $479
AMD Radeon RX 7700 XT
AMD · RDNA 3
432.0 GB/s · 3,456 SP · 245W TDP · $449
Intel Arc B580
Intel · Battlemage
456.0 GB/s · 190W TDP · $249
NVIDIA GeForce GTX 1080 Ti
NVIDIA · Pascal
484.4 GB/s · 3,584 CUDA · 250W TDP · $699
NVIDIA GeForce RTX 3060 12GB
NVIDIA · Ampere
360.0 GB/s · 3,584 CUDA · 170W TDP · $329
NVIDIA GeForce RTX 3080
NVIDIA · Ampere
760.3 GB/s · 8,704 CUDA · 320W TDP · $699
NVIDIA GeForce RTX 3080 Ti
NVIDIA · Ampere
912.4 GB/s · 10,240 CUDA · 350W TDP · $1,199
NVIDIA GeForce RTX 4070
NVIDIA · Ada Lovelace
504.0 GB/s · 5,888 CUDA · 200W TDP · $599
NVIDIA GeForce RTX 4070 SUPER
NVIDIA · Ada Lovelace
504.0 GB/s · 7,168 CUDA · 220W TDP · $599
NVIDIA GeForce RTX 4070 Ti
NVIDIA · Ada Lovelace
504.0 GB/s · 7,680 CUDA · 285W TDP · $799
NVIDIA GeForce RTX 5070
NVIDIA · Blackwell
672.0 GB/s · 6,144 CUDA · 250W TDP · $549
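Because token generation is typically memory-bandwidth-bound, the bandwidth column above gives a back-of-envelope ceiling on decode speed: bandwidth divided by the bytes read per token (roughly the model's size in VRAM). A sketch using a few cards from the list, assuming a ~4.2 GB model (7B at 4-bit); real-world throughput is lower than this theoretical bound:

```python
# Back-of-envelope decode-speed ceiling. Token generation usually streams
# the full set of weights per token, so the theoretical upper bound is
# bandwidth / model size. Real throughput is lower; this is an estimate,
# not a benchmark.

MODEL_GB = 4.2  # assumed: a 7B model at 4-bit quantization

gpus = {  # memory bandwidth in GB/s, taken from the list above
    "RTX 3060 12GB": 360.0,
    "RX 6700 XT": 384.0,
    "RTX 4070": 504.0,
    "RTX 5070": 672.0,
    "RTX 3080 Ti": 912.4,
}

for name, bandwidth in gpus.items():
    ceiling = bandwidth / MODEL_GB  # theoretical tokens/sec upper bound
    print(f"{name}: ~{ceiling:.0f} tok/s ceiling")
```

This is why the RTX 3080 Ti (912.4 GB/s) sits well ahead of the RTX 3060 (360.0 GB/s) for local inference despite both fitting the same 4-bit 7B model in their 12 GB of VRAM.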