
Llama 3.2 8X3B MOE Dark Champion Instruct Uncensored Abliterated 18.4B GGUF — Hardware Requirements & GPU Compatibility

Chat · Roleplay

A creative frankenmerge Mixture-of-Experts model built by DavidAU from eight copies of Llama 3.2 3B, totaling 18.4 billion parameters with only a fraction active per token. This abliterated, uncensored MoE is specifically designed for roleplay, creative writing, and storytelling without content restrictions. As a custom MoE architecture, this model offers an unusual tradeoff: it provides the diversity and capacity of a much larger model while keeping per-token compute closer to a 3B model. The GGUF format makes it straightforward to run locally. Best suited for creative and narrative use cases rather than factual or analytical tasks. Expect imaginative and unrestricted outputs with the distinctive character of community-built experimental merges.

75.5K downloads · 511 likes · Dec 2025

Specifications

Publisher
DavidAU
Family
Llama 3
Parameters
18.4B total (8×3B MoE; ~3B active per token)
Release Date
2025-12-01
License
Apache 2.0

Get Started

How Much VRAM Does Llama 3.2 8X3B MOE Dark Champion Instruct Uncensored Abliterated 18.4B GGUF Need?

Select a quantization to see compatible GPUs below.

Quantization    Bits    VRAM
Q2_K            3.40    1.4 GB
Q3_K_S          3.50    1.4 GB
Q3_K_M          3.90    1.6 GB
Q4_0            4.00    1.6 GB
Q3_K_L          4.10    1.7 GB
IQ4_XS          4.30    1.8 GB
Q4_K_S          4.50    1.9 GB
Q4_K_M          4.80    2.0 GB
Q5_K_S          5.50    2.3 GB
Q5_K_M          5.70    2.4 GB
Q6_K            6.60    2.7 GB
Q8_0            8.00    3.3 GB
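
As a rough sanity check, the sketch below reproduces the VRAM column from the bits-per-weight values, using the ~3B parameter count that the VRAM formula in the FAQ below is based on; the ~10% overhead margin is an assumption chosen to match the table, not a published figure.

```python
# Minimal sketch: estimate weight size per quantization from bits-per-weight.
# Assumes the ~3B parameter count used by this page's VRAM formula; the extra
# ~10% margin for runtime overhead is an assumption, not a published figure.

PARAMS = 3.0e9  # parameter count used in this page's estimate

QUANTS = {  # name: effective bits per weight (from the table above)
    "Q2_K": 3.40, "Q4_0": 4.00, "Q4_K_M": 4.80, "Q6_K": 6.60, "Q8_0": 8.00,
}

for name, bits in QUANTS.items():
    weights_gb = PARAMS * bits / 8 / 1e9   # raw weight size
    est_vram_gb = weights_gb * 1.10        # assumed ~10% overhead margin
    print(f"{name:7s} {weights_gb:4.1f} GB weights  ~{est_vram_gb:4.1f} GB VRAM")
```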

Which GPUs Can Run Llama 3.2 8X3B MOE Dark Champion Instruct Uncensored Abliterated 18.4B GGUF?

Q4_K_M · 2.0 GB

Llama 3.2 8X3B MOE Dark Champion Instruct Uncensored Abliterated 18.4B GGUF (Q4_K_M) requires 2.0 GB of VRAM to load the model weights. For comfortable inference with headroom for KV cache and system overhead, 3+ GB is recommended. 35 GPUs can run it, including NVIDIA GeForce RTX 5090, NVIDIA GeForce RTX 3090 Ti.
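
The compatibility call boils down to a simple threshold check. Below is a minimal sketch of that logic; the GPU names and memory sizes are illustrative examples taken from public specs, not the full 35-GPU list.

```python
# Minimal sketch of the compatibility check described above: a GPU qualifies if its
# VRAM covers the Q4_K_M weights (2.0 GB) and is "comfortable" at 3+ GB.
# GPU memory sizes below are public specs, listed only as examples.

REQUIRED_GB = 2.0      # Q4_K_M weights
COMFORTABLE_GB = 3.0   # weights + KV cache + overhead headroom

gpus = {
    "NVIDIA GeForce RTX 5090": 32,
    "NVIDIA GeForce RTX 3090 Ti": 24,
    "NVIDIA GeForce RTX 3060": 12,
}

for name, vram_gb in gpus.items():
    if vram_gb >= COMFORTABLE_GB:
        print(f"{name}: fits with headroom ({vram_gb} GB)")
    elif vram_gb >= REQUIRED_GB:
        print(f"{name}: fits, but tight ({vram_gb} GB)")
    else:
        print(f"{name}: not enough VRAM ({vram_gb} GB)")
```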

Which Devices Can Run Llama 3.2 8X3B MOE Dark Champion Instruct Uncensored Abliterated 18.4B GGUF?

Q4_K_M · 2.0 GB

33 devices with unified memory can run Llama 3.2 8X3B MOE Dark Champion Instruct Uncensored Abliterated 18.4B GGUF, including NVIDIA DGX H100, NVIDIA DGX A100 640GB.


Frequently Asked Questions

How much VRAM does Llama 3.2 8X3B MOE Dark Champion Instruct Uncensored Abliterated 18.4B GGUF need?

Llama 3.2 8X3B MOE Dark Champion Instruct Uncensored Abliterated 18.4B GGUF requires 2.0 GB of VRAM at Q4_K_M, or 3.3 GB at Q8_0.

VRAM = Weights + KV Cache + Overhead

Weights = 3B × 4.8 bits ÷ 8 = 1.8 GB

KV Cache ≈ 0.2 GB (at 2K context), plus ~0.3 GB framework overhead in practice
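
A minimal sketch of this estimate is below. The attention dimensions used for the KV-cache term (28 layers, 8 KV heads, head dim 128, fp16 cache) are assumed from the standard Llama 3.2 3B architecture and are not stated on this page, so treat the result as approximate.

```python
# Minimal sketch of the formula above: VRAM = weights + KV cache + overhead.
# The attention dimensions are assumed from the Llama 3.2 3B architecture
# (28 layers, 8 KV heads, head dim 128, fp16 cache); they are not stated on this page.

def estimate_vram_gb(params=3.0e9, bits_per_weight=4.8, context_len=2048,
                     n_layers=28, n_kv_heads=8, head_dim=128,
                     kv_bytes=2, overhead_gb=0.3):
    weights_gb = params * bits_per_weight / 8 / 1e9
    # K and V caches: 2 tensors per layer, each [context, n_kv_heads, head_dim]
    kv_cache_gb = 2 * n_layers * n_kv_heads * head_dim * context_len * kv_bytes / 1e9
    return weights_gb + kv_cache_gb + overhead_gb

print(f"{estimate_vram_gb():.1f} GB")  # ~2.3 GB with these assumptions; the 3+ GB recommendation leaves extra headroom
```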


Learn more about VRAM estimation →

What's the best quantization for Llama 3.2 8X3B MOE Dark Champion Instruct Uncensored Abliterated 18.4B GGUF?

For Llama 3.2 8X3B MOE Dark Champion Instruct Uncensored Abliterated 18.4B GGUF, Q4_K_M (2.0 GB) offers the best balance of quality and VRAM usage. Q5_K_S (2.3 GB) provides better quality if you have the VRAM. The smallest option is Q2_K at 1.4 GB.

VRAM requirement by quantization

Q2_K      1.4 GB
Q4_0      1.6 GB
Q4_K_S    1.9 GB
Q4_K_M    2.0 GB  ★
Q5_K_S    2.3 GB
Q8_0      3.3 GB

★ Recommended — best balance of quality and VRAM usage.
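
If you want to automate the choice, a minimal sketch is below: it picks the largest listed quantization that fits a given VRAM budget. The 1 GB headroom for KV cache and overhead is an assumption.

```python
# Minimal sketch: pick the largest quantization from the list above that fits a
# given VRAM budget. The 1 GB headroom for KV cache and overhead is an assumption.

QUANT_VRAM_GB = {  # from the table above
    "Q2_K": 1.4, "Q4_0": 1.6, "Q4_K_S": 1.9,
    "Q4_K_M": 2.0, "Q5_K_S": 2.3, "Q8_0": 3.3,
}

def best_quant(available_vram_gb, headroom_gb=1.0):
    budget = available_vram_gb - headroom_gb
    fitting = [(v, name) for name, v in QUANT_VRAM_GB.items() if v <= budget]
    return max(fitting)[1] if fitting else None

print(best_quant(4.5))  # -> Q8_0
print(best_quant(3.0))  # -> Q4_K_M
```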

Learn more about quantization →

Can I run Llama 3.2 8X3B MOE Dark Champion Instruct Uncensored Abliterated 18.4B GGUF on a Mac?

Yes. Llama 3.2 8X3B MOE Dark Champion Instruct Uncensored Abliterated 18.4B GGUF needs as little as 1.4 GB of memory at Q2_K and about 2.0 GB at Q4_K_M, which fits comfortably within the unified memory of any Apple Silicon Mac, including base 8 GB configurations. It runs well under llama.cpp, LM Studio, or Ollama with Metal acceleration.

Can I run Llama 3.2 8X3B MOE Dark Champion Instruct Uncensored Abliterated 18.4B GGUF locally?

Yes — Llama 3.2 8X3B MOE Dark Champion Instruct Uncensored Abliterated 18.4B GGUF can run locally on consumer hardware. At Q4_K_M quantization it needs 2.0 GB of VRAM. Popular tools include Ollama, LM Studio, and llama.cpp.
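
As one example, here is a minimal sketch using the llama-cpp-python bindings; the model path is a placeholder for whichever quantized GGUF file you download, and Ollama or LM Studio work equally well without any code.

```python
# Minimal sketch using the llama-cpp-python bindings (pip install llama-cpp-python).
# The model path is a placeholder; point it at the quantized GGUF file you downloaded.

from llama_cpp import Llama

llm = Llama(
    model_path="./Llama-3.2-8X3B-MOE-Dark-Champion-Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,        # context length (matches the 2K-context VRAM estimate above)
    n_gpu_layers=-1,   # offload all layers to the GPU; set 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write the opening scene of a noir short story."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```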

How fast is Llama 3.2 8X3B MOE Dark Champion Instruct Uncensored Abliterated 18.4B GGUF?

At Q4_K_M, Llama 3.2 8X3B MOE Dark Champion Instruct Uncensored Abliterated 18.4B GGUF can reach ~1472 tok/s on AMD Instinct MI300X. On NVIDIA GeForce RTX 4090: ~331 tok/s. Speed depends mainly on GPU memory bandwidth. Real-world results typically within ±20%.

tok/s = (bandwidth GB/s ÷ model GB) × efficiency

Example: AMD Instinct MI300X, 5300 GB/s ÷ 2.0 GB × 0.55 ≈ 1472 tok/s

Estimated speed at Q4_K_M (2.0 GB)

AMD Instinct MI300X: ~1472 tok/s
NVIDIA GeForce RTX 4090: ~331 tok/s

Real-world results typically within ±20%. Speed depends on batch size, quantization kernel, and software stack.
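
For reference, a minimal sketch of this estimate is below. The bandwidth figures are approximate public specs and the 0.55 efficiency factor comes from the example above, so the outputs only roughly match the per-GPU numbers quoted.

```python
# Minimal sketch of the speed estimate above:
# tok/s ≈ (memory bandwidth / model size) × efficiency.
# Bandwidth figures are approximate public specs; 0.55 is the assumed efficiency factor.

MODEL_GB = 2.0     # Q4_K_M weights
EFFICIENCY = 0.55  # fraction of peak bandwidth achieved in practice (assumed)

bandwidth_gb_s = {
    "AMD Instinct MI300X": 5300,
    "NVIDIA GeForce RTX 4090": 1008,
}

for gpu, bw in bandwidth_gb_s.items():
    print(f"{gpu}: ~{bw / MODEL_GB * EFFICIENCY:.0f} tok/s")
```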

Learn more about tok/s estimation →

What's the download size of Llama 3.2 8X3B MOE Dark Champion Instruct Uncensored Abliterated 18.4B GGUF?

At Q4_K_M, the download is about 1.80 GB. The highest-quality quantization offered, Q8_0, is 3.00 GB. The smallest option (Q2_K) is 1.27 GB.