Quick start
Ollama support in NemoClaw is still experimental.
Pull a model:
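For example, to fetch the recommended local model from the list further down (any of the listed tags works the same way):

```shell
# Download a model for local use; the tag comes from the
# "Recommended models" list below. Swap in another tag as needed.
ollama pull nemotron-3-nano:30b
```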
Platform support
| Platform | Runtime | Status |
|---|---|---|
| Linux (Ubuntu 22.04+) | Docker | Primary |
| macOS (Apple Silicon) | Colima or Docker Desktop | Supported |
| Windows | WSL2 with Docker Desktop | Supported |
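When NemoClaw runs under WSL2 or inside a container, the host's Ollama server must listen on more than localhost. One way to start it like that (a sketch; assumes Ollama is installed on the host):

```shell
# Bind Ollama to all interfaces so WSL2/container sandboxes can reach it.
# Note: 0.0.0.0 exposes the API to the whole network; restrict it
# (firewall, specific interface) on anything but a trusted machine.
export OLLAMA_HOST=0.0.0.0
ollama serve
```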
Ollama must be installed and running before the installer runs. When running inside WSL2 or a container, ensure Ollama is reachable from the sandbox (e.g. `OLLAMA_HOST=0.0.0.0`).
System requirements
- CPU: 4 vCPU minimum
- RAM: 8 GB minimum (16 GB recommended)
- Disk: 20 GB free (40 GB recommended for local models)
- Node.js 20+ and npm 10+
- Container runtime (Docker preferred)
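A quick preflight sketch for the list above (these are the standard CLIs; nothing here is NemoClaw-specific):

```shell
# Report which required tools are on PATH; count anything missing.
missing=0
for cmd in node npm docker; do
  if command -v "$cmd" >/dev/null 2>&1; then
    printf '%s: %s\n' "$cmd" "$("$cmd" --version 2>/dev/null | head -n 1)"
  else
    printf '%s: NOT FOUND\n' "$cmd" >&2
    missing=$((missing + 1))
  fi
done
```

Check the reported versions against the minimums above (Node.js 20+, npm 10+).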
Recommended models
- `nemotron-3-super:cloud` — Strong reasoning and coding
- `qwen3.5:cloud` — 397B; reasoning and code generation
- `nemotron-3-nano:30b` — Recommended local model; fits in 24 GB VRAM
- `qwen3.5:27b` — Fast local reasoning (~18 GB VRAM)
- `glm-4.7-flash` — Reasoning and code generation (~25 GB VRAM)

