# Getting Started

Run your first Gemma 4 model in under 5 minutes.

Pick a runtime and get Gemma 4 running locally or in the cloud.

## Choose your path

- **Ollama**: Easiest option. A single command on Mac, Linux, or Windows. Recommended for most users.
- **LM Studio**: GUI-based; no terminal required. Great for Windows users.
- **Hugging Face**: Full Python control. Best for fine-tuning and ML pipelines.
- **llama.cpp**: C++ backend with CUDA/Metal support. Maximum quantization control.

## Gemma 4 Developer Hub

Everything you need to run, deploy, and debug Gemma 4 models — from edge devices to production clusters.

## Quickstart — Ollama

Run Gemma 4 E4B locally in 3 commands using Ollama.
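A minimal sketch of those three commands. Note that the model tag `gemma4:e4b` is an assumption for illustration only; check the Ollama model library for the tag actually published for Gemma 4 E4B.

```shell
# Install Ollama (macOS/Linux; on Windows, use the installer from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Pull the model weights. NOTE: "gemma4:e4b" is a hypothetical placeholder
# tag; confirm the real tag in the Ollama model library before pulling.
ollama pull gemma4:e4b

# Start an interactive chat session with the model
ollama run gemma4:e4b
```

Once `ollama run` is active, Ollama also serves a local HTTP API on port 11434, so the same model can be reached from scripts and applications.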
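For the llama.cpp path mentioned above, a build-and-run sketch follows. The GGUF filename is a hypothetical placeholder; use whichever Gemma 4 quantization you actually download.

```shell
# Clone and build llama.cpp; add -DGGML_CUDA=ON to the first cmake call
# for NVIDIA GPUs (Metal is enabled by default on Apple Silicon)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Run a quantized model. "gemma4-e4b-Q4_K_M.gguf" is a hypothetical
# placeholder filename; substitute the GGUF file you downloaded.
./build/bin/llama-cli -m gemma4-e4b-Q4_K_M.gguf -p "Hello" -n 64
```

Choosing a smaller quantization (e.g. Q4 variants over Q8) trades some output quality for lower memory use, which is the main lever llama.cpp gives you over the managed runtimes.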