Documentation
Setup Guides
Step-by-step instructions for running local LLMs on your hardware. Pick a backend to get started.
Run Local LLMs with Ollama
The easiest way to run LLMs locally. One command to install, one command to run any model.
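Once Ollama is installed and a model is pulled, the server listens on http://localhost:11434 by default. A minimal sketch of querying it from Python over its REST API; the model name is an example, use whatever you have pulled:

```python
# Minimal sketch: query a locally running Ollama server over its REST API.
# Assumes Ollama is running and a model has been pulled,
# e.g. `ollama pull llama3.2` (the model name is an example).
import json
import urllib.request

payload = json.dumps({
    "model": "llama3.2",   # swap in any model you've pulled
    "prompt": "Why is the sky blue?",
    "stream": False,       # return one JSON object instead of a stream
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```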
Run Local LLMs with llama.cpp
Maximum control over LLM inference. Build from source and run GGUF models on CPU, GPU, or split across both.
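llama.cpp also ships llama-server, an OpenAI-compatible HTTP server that listens on port 8080 by default. A sketch of talking to it from Python, assuming you started it with something like ./llama-server -m model.gguf:

```python
# Minimal sketch: chat with llama.cpp's llama-server over its
# OpenAI-compatible endpoint. Assumes the server was started with a GGUF
# model loaded and is listening on the default port 8080.
import json
import urllib.request

payload = json.dumps({
    "messages": [{"role": "user", "content": "Summarize GGUF in one sentence."}],
    "max_tokens": 128,
}).encode()

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())
    print(body["choices"][0]["message"]["content"])
```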
Run Local LLMs with LM Studio
A desktop app for downloading, configuring, and chatting with local LLMs. No command line required.
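LM Studio is GUI-first, but it can also expose the loaded model through a local OpenAI-compatible server (enabled from the app; the default base URL is http://localhost:1234/v1). If you turn that on, a script can reach it as sketched below; the exact port and loaded model are assumptions you can confirm in LM Studio's server view:

```python
# Minimal sketch: query LM Studio's local server (OpenAI-compatible API).
# Assumes the server is enabled in LM Studio and a model is loaded;
# the default base URL is http://localhost:1234/v1.
import json
import urllib.request

payload = json.dumps({
    "messages": [{"role": "user", "content": "Hello from a local script!"}],
}).encode()

req = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])
```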
Run Local LLMs with Jan
An open-source ChatGPT alternative that runs 100% offline. Clean UI, local model management, and extensible via plugins.
Run Local LLMs with GPT4All
A free, local, privacy-aware chatbot. No GPU required — runs on CPU with decent speed for smaller models.
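GPT4All also offers a Python binding (pip install gpt4all) if you would rather script it than use the chat window. A sketch of CPU-only generation; the model filename is an example from the GPT4All catalog and is downloaded automatically on first use:

```python
# Minimal sketch: CPU-only inference with the gpt4all Python package.
# The model filename is an example; any model from the GPT4All catalog
# works and is downloaded automatically if not already present.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # small model, fine on CPU
with model.chat_session():
    print(model.generate("Name three uses for a local LLM.", max_tokens=128))
```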
OpenClaw Setup Guide: Run OpenClaw with a Local LLM
Step-by-step OpenClaw setup guide for Ollama, llama.cpp, and LM Studio. Run OpenClaw with a local LLM for privacy, lower cost, and faster iteration.
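The guide covers each backend's exact settings; the common thread is that these runtimes expose OpenAI-compatible endpoints, so a tool like OpenClaw can be pointed at any of them. As a rough illustration of that pattern (not OpenClaw's actual configuration), here is an OpenAI client redirected at Ollama's compatibility endpoint; the base URL, model name, and placeholder key are all assumptions:

```python
# Illustration of the connection pattern, not OpenClaw's actual config:
# local runtimes expose OpenAI-compatible endpoints, so any OpenAI client
# can be redirected at them. Ollama's compatibility API lives under /v1;
# the api_key is a placeholder since local servers typically ignore it.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")
reply = client.chat.completions.create(
    model="llama3.2",  # whatever model your local runtime has loaded
    messages=[{"role": "user", "content": "ping"}],
)
print(reply.choices[0].message.content)
```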
Need help choosing before setup?
Use the learning hub to choose a runtime, weigh model tradeoffs, and validate your setup with the eval templates.