All Hardware for Running LLMs Locally

Browse 56 GPUs and 33 devices — MacBooks, desktops, AI boxes, and more.

Choosing Hardware for Local AI

VRAM is the most important specification for running LLMs locally. Discrete GPUs offer dedicated VRAM, while Apple Silicon Macs share unified memory between the CPU and GPU. AI development kits like NVIDIA Jetson provide optimized inference at the edge. Pick your hardware below to see which models you can run.
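As a rule of thumb, a model's weight footprint is its parameter count times the bytes per weight, plus headroom for the KV cache and activations. The sketch below is a rough estimator, not an exact calculator; the `overhead` multiplier is an assumption, and real usage varies with context length and runtime.

```python
def estimate_vram_gb(params_b: float, bits_per_weight: int = 4, overhead: float = 1.2) -> float:
    """Rough VRAM needed to run a model locally.

    params_b: parameter count in billions (e.g. 7 for a 7B model).
    bits_per_weight: 16 for FP16, 8 or 4 for common quantizations.
    overhead: assumed multiplier for KV cache and activations.
    """
    # 1B params at 8 bits/weight is ~1 GB, so scale by bits/8.
    weight_gb = params_b * bits_per_weight / 8
    return weight_gb * overhead

# A 7B model at 4-bit quantization needs roughly 4.2 GB.
print(round(estimate_vram_gb(7), 1))
```

By this estimate, a 7B model at 4-bit fits comfortably in 6 GB of VRAM, while the same model at FP16 (~16.8 GB) needs a 24 GB card or a Mac with ample unified memory.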

Discrete GPUs

View all 56

NVIDIA, AMD, and Intel GPUs — VRAM is the key spec for local LLMs

Laptops

MacBooks and other laptops with unified or dedicated GPU memory

Desktops & Mini PCs

View all 9

Mac Studio, Mac Mini, custom PCs, and workstations with maximum memory

AI Development Kits

View all 3

NVIDIA Jetson, dedicated inference hardware, and edge AI devices

Browse by VRAM

Find the best models for your VRAM tier
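Matching models to a VRAM tier is the inverse of the estimate above: given a memory budget, check which quantizations of a model fit. A minimal sketch, assuming the same rough ~1.2x overhead for KV cache and activations:

```python
def quantizations_that_fit(params_b: float, vram_gb: float, overhead: float = 1.2) -> list[int]:
    """Return the bits-per-weight options (FP16, INT8, INT4) that fit in vram_gb."""
    return [bits for bits in (16, 8, 4)
            if params_b * bits / 8 * overhead <= vram_gb]

# A 7B model on an 8 GB card: only 4-bit fits.
print(quantizations_that_fit(7, 8))
# A 13B model on a 24 GB card: 8-bit and 4-bit both fit.
print(quantizations_that_fit(13, 24))
```

The same check works for unified-memory Macs if you substitute the share of system memory the GPU can actually use.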