Getting Started

Run your first Gemma 4 model in under 5 minutes.

Pick a runtime and get Gemma 4 running locally or in the cloud.

Choose your path

Ollama

The easiest option. A single command on macOS, Linux, and Windows. Recommended for most users.

LM Studio

GUI-based. No terminal required. Great for Windows users.

Hugging Face

Full Python control. Best for fine-tuning and ML pipelines.

llama.cpp

C/C++ backend with CUDA and Metal support. Maximum quantization control.

Gemma 4 Developer Hub

Everything you need to run, deploy, and debug Gemma 4 models — from edge devices to production clusters.

Quickstart — Ollama

Run Gemma 4 E4B locally in 3 commands using Ollama.
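The three-command flow can be sketched as follows. Note that the model tag used here (`gemma4:e4b`) is an assumption for illustration — check Ollama's model registry for the published name before running.

```shell
# 1. Install Ollama (macOS/Linux; Windows users can download the installer from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# 2. Pull the model weights (tag "gemma4:e4b" is assumed, not verified)
ollama pull gemma4:e4b

# 3. Start an interactive chat session with the model
ollama run gemma4:e4b
```

If the pull fails with an unknown-model error, list available tags with `ollama list` or search the Ollama model library for the correct Gemma 4 tag.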
