gemma4.dev
  • Models
  • Run Local
  • Deploy
  • Guides
Try Gemma 4 ↗

Compare Gemma 4 Models

Full specification comparison of all four Gemma 4 models: E2B, E4B, 26B A4B, and 31B. Use the table below to identify which model fits your hardware and use case.

| Spec           | E2B         | E4B ★ Popular | 26B A4B      | 31B         |
|----------------|-------------|---------------|--------------|-------------|
| Parameters     | 2.1B        | 4.4B          | 26.1B (MoE)  | 31B         |
| Architecture   | Dense       | Dense         | Sparse MoE   | Dense       |
| Context Length | 8K          | 32K           | 128K         | 256K        |
| VRAM (BF16)    | 5 GB        | 10 GB         | 28 GB        | 64 GB       |
| VRAM (Q4)      | 2 GB        | 4 GB          | 14 GB        | 18 GB       |
| Multimodal     | ✗           | ✓             | ✓            | ✓           |
| Thinking Mode  | ✗           | ✓             | ✓            | ✓           |
| Tool Use       | ✓           | ✓             | ✓            | ✓           |
| License        | Gemma Terms | Gemma Terms   | Gemma Terms  | Gemma Terms |
| Best For       | Edge / CPU  | Daily driver  | Long context | Enterprise  |
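As a rule of thumb, the VRAM figures in the table scale with parameter count times bytes per weight, plus some runtime overhead (KV cache, activations). The sketch below is an illustrative estimator, not the method used to produce the table; the `overhead_frac` value is an assumption, and real memory use varies by runtime and context length.

```python
def vram_estimate_gb(params_billion: float, bits_per_weight: float,
                     overhead_frac: float = 0.1) -> float:
    """Rough VRAM estimate: weights at the given precision plus a
    flat overhead fraction for KV cache and activations (assumed)."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * (1 + overhead_frac) / 1e9

# E4B at BF16 (16 bits/weight) lands near the table's 10 GB figure.
print(round(vram_estimate_gb(4.4, 16), 1))   # ~9.7 GB
# E4B at Q4 (~4.5 bits/weight with quantization metadata).
print(round(vram_estimate_gb(4.4, 4.5), 1))  # ~2.7 GB
```

Quantized figures often come in above the naive 4-bit estimate because Q4 formats store scales and zero-points per block, and runtimes reserve extra memory for long contexts.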

Which model should I use?

Find the statement below that best matches your hardware or use case to get a direct recommendation.

  • I have under 8 GB VRAM → E2B (2 GB at Q4; runs on CPU too)
  • I want the best balance → E4B (4 GB at Q4; multimodal + thinking)
  • I need 128K context → 26B A4B (14 GB at Q4; MoE architecture)
  • I need maximum quality → 31B (18 GB at Q4; 256K context)
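The decision list above can be expressed as a small picker function. This is a hypothetical helper mirroring the recommendations on this page; the function name and VRAM thresholds are illustrative assumptions, not an official API.

```python
def recommend_model(vram_gb: float, need_long_context: bool = False,
                    need_max_quality: bool = False) -> str:
    """Mirror the decision list above (thresholds are illustrative:
    Q4 VRAM figures from the comparison table)."""
    if need_max_quality and vram_gb >= 18:
        return "31B"          # 18 GB at Q4; 256K context
    if need_long_context and vram_gb >= 14:
        return "26B A4B"      # 14 GB at Q4; MoE architecture
    if vram_gb < 8:
        return "E2B"          # 2 GB at Q4; runs on CPU too
    return "E4B"              # 4 GB at Q4; multimodal + thinking

print(recommend_model(6))                            # E2B
print(recommend_model(12))                           # E4B
print(recommend_model(16, need_long_context=True))   # 26B A4B
```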

gemma4.dev

Run, deploy, and debug Gemma 4 models. Built for fast-moving developers.

GitHub · X (Twitter) · Email
Models
  • Gemma 4 E2B
  • Gemma 4 E4B
  • Gemma 4 26B
  • Gemma 4 31B
  • Compare Models
Run Local
  • Ollama
  • Hugging Face
  • GGUF
  • LM Studio
  • llama.cpp
Deploy
  • vLLM
  • Gemini API
  • Vertex AI
  • Cloud Run
Guides & Help
  • Thinking Mode
  • Prompt Formatting
  • Function Calling
  • Error Fixes
© 2026 gemma4.dev. All Rights Reserved.