All LLM Models
Browse 12 LLM models with VRAM requirements, quantization options, and hardware compatibility.
Understanding LLM VRAM Requirements
How much VRAM you need depends on the model size and quantization level. Quantization reduces the precision of model weights, trading small quality losses for significantly lower VRAM usage. For example, a 7B parameter model needs ~14 GB at FP16 but only ~4 GB at Q4_K_M quantization.
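The arithmetic above can be sketched as a small estimator. The bits-per-weight figures below are approximations (Q4_K_M averages roughly 4.85 bits per weight in llama.cpp's K-quant scheme), and the function counts only the weights, ignoring KV cache and runtime overhead, so treat the results as lower bounds.

```python
# Rough VRAM estimate for model weights alone, ignoring KV cache
# and runtime overhead. Bits-per-weight values are approximate
# averages for common llama.cpp quantization formats.
BITS_PER_WEIGHT = {
    "FP16": 16.0,
    "Q8_0": 8.5,
    "Q5_K_M": 5.7,
    "Q4_K_M": 4.85,  # approximate average; K-quants mix block types
}

def estimate_weight_vram_gb(params_billion: float, quant: str) -> float:
    """Return approximate decimal GB needed to hold the weights."""
    bits = BITS_PER_WEIGHT[quant]
    bytes_total = params_billion * 1e9 * bits / 8
    return bytes_total / 1e9

# A 7B model: ~14 GB at FP16, ~4 GB at Q4_K_M, matching the text above.
print(round(estimate_weight_vram_gb(7, "FP16"), 1))    # 14.0
print(round(estimate_weight_vram_gb(7, "Q4_K_M"), 1))  # 4.2
```

The same function applied to the larger models below (27B at Q4_K_M gives roughly 16-17 GB of weights) shows why they are quoted as needing 24GB cards once cache and overhead are added.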
Model List
Gemma 7B
Google · 7B
Google Gemma 7B is a 7-billion parameter base (pretrained) model from the original Gemma generation, Google's first openly available family of language models. It represents Google's initial entry into the open-weight LLM space. While superseded by Gemma 2 and Gemma 3 in terms of benchmark performance, the original Gemma 7B remains a solid foundation model and a useful reference point in the evolution of Google's open models. Released under the Gemma license.
Gemma 3 27B IT
Google · 27.4B
Google Gemma 3 27B IT is a 27.4-billion parameter multimodal instruction-tuned model from Google's Gemma 3 family. It supports both text and image inputs, making it one of the most capable openly available vision-language models for local inference. The model handles conversational AI, visual question answering, image description, and complex reasoning tasks across modalities. Gemma 3 27B IT requires a GPU with at least 24GB of VRAM for quantized inference, placing it within reach of high-end consumer cards like the RTX 4090. It uses a dense Transformer architecture with a 128K-token context window and benefits from Google's extensive pretraining pipeline. Released under the Gemma license.
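The 24GB figure covers more than the quantized weights: the KV cache grows linearly with context length and must fit alongside them. A minimal sketch of the cache-size formula, using illustrative architecture numbers rather than Gemma 3 27B's published config:

```python
def kv_cache_gb(n_layers: int, n_kv_heads: int, head_dim: int,
                seq_len: int, bytes_per_elem: int = 2) -> float:
    """Approximate KV-cache size in decimal GB.

    Keys and values are each cached per layer, per KV head,
    per token position (hence the leading factor of 2).
    """
    total = 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem
    return total / 1e9

# Illustrative numbers only -- NOT Gemma 3 27B's actual configuration.
print(round(kv_cache_gb(n_layers=60, n_kv_heads=16,
                        head_dim=128, seq_len=8192), 2))  # 4.03
```

Even with a hypothetical config like this, an 8K-token context adds several GB of FP16 cache on top of ~16 GB of Q4_K_M weights, which is why the quoted requirement sits at 24GB rather than at the weight size alone.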
Gemma 2 2B IT
Google · 2B
Google Gemma 2 2B IT is a 2-billion parameter instruction-tuned model from Google's Gemma 2 family, the smallest variant in the Gemma 2 series. It is designed for efficient local inference on resource-constrained hardware, handling basic conversational tasks and simple instruction following at minimal compute cost. The model can run on GPUs with as little as 4GB of VRAM when quantized, and even on CPU-only setups. Released under the Gemma license.
Gemma 7B IT
Google · 7B
Google Gemma 7B IT is a 7-billion parameter instruction-tuned model from the original Gemma generation. It is fine-tuned for conversational use and general instruction following, running efficiently on consumer GPUs with 8GB or more of VRAM. As a first-generation Gemma model, it has been superseded by Gemma 2 and Gemma 3 models in quality and capability, but it remains well-supported by inference frameworks. Released under the Gemma license.
Gemma 3 270M
Google · 270M
Google Gemma 3 270M is a 270-million parameter base (pretrained) model from Google's Gemma 3 family. It is an experimental release intended for research, fine-tuning, and exploring the capabilities of ultra-small language models. The model runs on virtually any hardware with negligible resource requirements. Released under the Gemma license.
Gemma 3 1B IT
Google · 1B
Google Gemma 3 1B IT is a 1-billion parameter instruction-tuned model from Google's Gemma 3 family. It is an ultra-compact text-only chat model designed for deployment on minimal hardware, including low-VRAM GPUs and edge devices. The model handles basic conversational tasks, simple instruction following, and lightweight text generation. It can run on virtually any modern GPU and even on CPU-only setups with acceptable latency. Released under the Gemma license.
Gemma 2 9B IT
Google · 9.2B
Google Gemma 2 9B IT is a 9.2-billion parameter instruction-tuned model from Google's Gemma 2 series. It is a text-only chat model optimized for conversational tasks, instruction following, and general-purpose assistance. At release, it was recognized for delivering unusually strong performance relative to its parameter count. The model runs efficiently on consumer GPUs with 8-12GB of VRAM in quantized formats, making it accessible on mainstream hardware. It is a popular choice for local inference among users who want strong quality without the VRAM demands of larger models. Released under the Gemma license.
Gemma 3 12B IT
Google · 12B
Google Gemma 3 12B IT is a 12-billion parameter multimodal instruction-tuned model from Google's Gemma 3 series. It supports both text and image inputs, offering vision-language capabilities at a more accessible size point than the 27B variant. Gemma 3 12B IT runs on consumer GPUs with 12-16GB of VRAM in quantized formats, making it a practical choice for local multimodal inference without requiring top-tier hardware. Released under the Gemma license.
Gemma 2 2B
Google · 2B
Google Gemma 2 2B is a 2-billion parameter base (pretrained) model from Google's Gemma 2 family. As a base model, it is not instruction-tuned and is intended for fine-tuning, research, and custom downstream applications. Its compact size makes it suitable for experimentation, rapid prototyping, and domain-specific fine-tuning on consumer hardware with minimal VRAM. Released under the Gemma license.
Gemma 3 270M IT
Google · 270M
Google Gemma 3 270M IT is a 270-million parameter instruction-tuned model from Google's Gemma 3 family, an experimental release pushing the boundaries of how small an effective chat model can be. The model runs on virtually any hardware, including entry-level GPUs and CPU-only setups, making it useful for experimentation, education, and exploring the limits of small-scale language modeling. Released under the Gemma license.
Gemma 2 27B IT
Google · 27.2B
Google Gemma 2 27B IT is a 27.2-billion parameter instruction-tuned model from Google's Gemma 2 generation. It is a text-only chat model optimized for conversational use, reasoning, and instruction following. Gemma 2 27B IT was one of the strongest openly available models in its size class at release. The model requires a GPU with at least 24GB of VRAM for quantized local inference. It is widely supported by popular inference engines and remains a strong choice for users seeking high-quality local chat without needing 70B-class hardware. Released under the Gemma license.
MedGemma 27B Text IT
Google · 27B
Google MedGemma 27B Text IT is a 27-billion parameter instruction-tuned model from Google, specialized for the medical domain and built on Gemma 3. It is fine-tuned on medical and clinical text data to provide improved performance on healthcare-related tasks such as medical question answering, clinical reasoning, and health information summarization. The model requires a GPU with at least 24GB of VRAM for quantized inference. Its domain specialization makes it notably more capable than general models on clinical benchmarks, though it should not be used as a substitute for professional medical advice. Released under the Gemma license.