Local LLM Hardware Checker
Can your PC run this model?
Search model, GPU, or use case. Get fast hardware-fit answers for local LLMs.
Choose a goal
What do you want AI to do?
Check hardware
Can your GPU handle it?
Pick a model
Right model for your specs
Follow a guide
Set up and start running
Check your hardware compatibility
We detect your GPU and RAM automatically. Edit values if needed, then see which models fit.
Detected hardware
GPU
Not detected
System RAM
Unknown
CPU Cores
Unknown
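If automatic detection fails, you can approximate the fit check by hand. The sketch below is a rough heuristic, not this site's actual formula: it assumes a quantized model needs about params × bits ÷ 8 GB for weights, plus roughly 20% overhead for the KV cache and runtime buffers (the `estimate_gb` helper and the 1.2 factor are illustrative assumptions).

```shell
#!/bin/sh
# Rough VRAM-fit heuristic (an illustrative assumption, not this site's
# detection logic): weights take about params_B * quant_bits / 8 GB,
# plus ~20% overhead for KV cache and runtime buffers.
estimate_gb() {
  # $1 = parameters in billions, $2 = quantization bits (e.g. 4 for Q4)
  awk -v p="$1" -v b="$2" 'BEGIN { printf "%.1f", p * b / 8 * 1.2 }'
}

# On NVIDIA systems, total VRAM (in MiB) can be read with:
#   nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits

echo "Llama 3.1 8B @ 4-bit needs roughly $(estimate_gb 8 4) GB"
```

Compare the estimate against your detected VRAM; if the model doesn't fit, a smaller parameter count or a lower-bit quantization is the usual next step.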
Recently added models
Fresh releases from our latest catalog syncs. Check back to see what's new.
CodeLlama 34B
Added to catalog 4/18/2026.
Mistral 7B v0.3
Added to catalog 4/18/2026.
Mistral Nemo 12B
Added to catalog 4/18/2026.
Mixtral 8x7B
Added to catalog 4/18/2026.
Mixtral 8x22B
Added to catalog 4/18/2026.
Codestral 22B
Added to catalog 4/18/2026.
Phi-3 Mini 3.8B
Added to catalog 4/18/2026.
Mistral Small 24B
Added to catalog 4/18/2026.
Browse by collection
Curated entry points for low VRAM, coding, reasoning, multimodal, and family pages.
Popular starter models
Most beginners start with Ollama + Llama 3.1 8B.
Llama 3.1 8B
Best all-around local model for most laptops and desktops
Gemma 3n E4B
Small multimodal upgrade with strong on-device quality
DeepSeek R1 7B
Reasoning-first pick with good budget hardware fit
Llama 4 Scout 17B
Higher-quality option for users with mid/high VRAM GPUs
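If your hardware checks out, the quickest path most beginners take is Ollama. A minimal sketch, assuming Ollama is installed from ollama.com; `llama3.1:8b` is the tag for Llama 3.1 8B in the Ollama model library, and the default pull is typically a 4-bit quantization:

```shell
#!/bin/sh
# Quick start sketch, assuming Ollama (https://ollama.com) is installed.
if command -v ollama >/dev/null 2>&1; then
  # Pulls the model on first use (several GB), then opens an interactive chat.
  ollama run llama3.1:8b
else
  echo "Ollama not found - install it from https://ollama.com/download"
fi
```

The same pattern works for the other starter picks by swapping the model tag.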
AI Agent
Run OpenClaw on your hardware
Check GPU, VRAM, and RAM requirements for OpenClaw and find compatible local models for your PC or Mac.