NemoClaw is NVIDIA’s open source security stack for OpenClaw. It wraps OpenClaw with the NVIDIA OpenShell runtime to provide kernel-level sandboxing, network policy controls, and audit trails for AI agents.

Quick start

1. Pull a model:
   ollama pull nemotron-3-nano:30b
2. Run the installer:
   curl -fsSL https://www.nvidia.com/nemoclaw.sh | \
     NEMOCLAW_NON_INTERACTIVE=1 \
     NEMOCLAW_PROVIDER=ollama \
     NEMOCLAW_MODEL=nemotron-3-nano:30b \
     bash
3. Connect to your sandbox:
   nemoclaw my-assistant connect
4. Open the TUI:
   openclaw tui
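
If a step fails, a quick way to check that Ollama is up and the model is present (this assumes Ollama is listening on its default port, 11434, and uses standard Ollama tooling rather than NemoClaw commands):

# Confirm the model was pulled
ollama list
# Confirm the Ollama API is reachable
curl http://localhost:11434/api/tags
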
Ollama support in NemoClaw is still experimental.

Platform support

Platform                 Runtime                      Status
Linux (Ubuntu 22.04+)    Docker                       Primary
macOS (Apple Silicon)    Colima or Docker Desktop     Supported
Windows                  WSL2 with Docker Desktop     Supported
CMD and PowerShell are not supported on Windows — WSL2 is required.
Ollama must be installed and running before you launch the installer. If the sandbox runs inside WSL2 or a container, make sure Ollama is reachable from it (e.g. OLLAMA_HOST=0.0.0.0).
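
For example, on a Linux host where Ollama runs as a systemd service, you can expose it on all interfaces so the sandbox can reach it. This is a sketch of standard Ollama configuration, not a NemoClaw-specific step; restrict the bind address to match your network policy:

# One-off: serve Ollama on all interfaces
OLLAMA_HOST=0.0.0.0 ollama serve

# Persistent: add Environment="OLLAMA_HOST=0.0.0.0" to the unit, then restart
sudo systemctl edit ollama.service
sudo systemctl restart ollama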

System requirements

  • CPU: 4 vCPU minimum
  • RAM: 8 GB minimum (16 GB recommended)
  • Disk: 20 GB free (40 GB recommended for local models)
  • Node.js 20+ and npm 10+
  • Container runtime (Docker preferred)
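
A quick way to verify these prerequisites on a Linux host (standard tools only; adjust for macOS or WSL2):

node --version    # expect v20 or newer
npm --version     # expect 10 or newer
docker info       # confirms the container runtime is up
free -h; df -h .  # available RAM and free disk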

Recommended models

  • nemotron-3-super:cloud — Strong reasoning and coding
  • qwen3.5:cloud — 397B; reasoning and code generation
  • nemotron-3-nano:30b — Recommended local model; fits in 24 GB VRAM
  • qwen3.5:27b — Fast local reasoning (~18 GB VRAM)
  • glm-4.7-flash — Reasoning and code generation (~25 GB VRAM)
More models at ollama.com/search.
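
To switch models later, pull the new one and re-run the installer with NEMOCLAW_MODEL pointing at it. This sketch assumes the installer accepts the same environment variables on repeat runs; qwen3.5:27b is used purely as an example from the list above:

ollama pull qwen3.5:27b
curl -fsSL https://www.nvidia.com/nemoclaw.sh | \
  NEMOCLAW_NON_INTERACTIVE=1 \
  NEMOCLAW_PROVIDER=ollama \
  NEMOCLAW_MODEL=qwen3.5:27b \
  bash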