NemoClaw is an open-source reference stack built and maintained by NVIDIA. It provides:
When combined with OpenClaw, NemoClaw provides the secure execution layer that enterprise teams need when deploying AI agents on sensitive workloads.
Before installing NemoClaw, confirm your system meets the minimum requirements:
Supported operating systems:
Required software:
For local inference with Nemotron models, you'll need significantly more resources — at least 16 GB of RAM and a capable GPU are recommended for the larger Nemotron variants. If you're using NVIDIA's cloud Endpoint API for inference, the base requirements above are sufficient.
NemoClaw installs with a single command:
curl -fsSL https://nvidia.com/nemoclaw.sh | bash
The install script will:
As with any piped install script, review the contents of https://nvidia.com/nemoclaw.sh before running it, especially in production environments.
One of NemoClaw's key design decisions is its flexibility around model inference. You have two primary options:
Running models locally keeps your data on-premises. NemoClaw supports local inference via Ollama, which handles model downloads and serving. This is the right choice when:
For local inference, you'll need sufficient hardware — Nemotron 3 Super 120B is a large model. Smaller Nemotron variants or other Ollama-compatible models work well on consumer hardware within the RAM requirements above.
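Ollama exposes a local HTTP API (by default on port 11434) that NemoClaw can target for on-premises inference. The sketch below builds a non-streaming generate request against that API; the model tag `nemotron-mini` is illustrative — substitute whatever Nemotron variant or Ollama-compatible model you have pulled.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a locally served model."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )

# The model tag here is a placeholder -- use the name you pulled with `ollama pull`.
req = build_request("nemotron-mini", "Summarize today's standup notes.")
```

Sending the request with `urllib.request.urlopen(req)` returns a JSON body whose `response` field holds the model output; data never leaves your machine.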
NVIDIA's Endpoint API gives you access to the full Nemotron model lineup, including Nemotron 3 Super 120B, without running the model locally. This is suitable when:
You'll need an NVIDIA developer account and API credentials to use this option.
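As a rough sketch of the cloud path, the request shape below assumes an OpenAI-compatible chat-completions endpoint; the URL and model identifier are assumptions — check your NVIDIA developer dashboard for the exact values, and supply your API key via the environment.

```python
import json
import os
import urllib.request

# Assumed endpoint URL -- confirm the exact address and model identifiers
# in your NVIDIA developer dashboard before relying on this.
API_URL = "https://integrate.api.nvidia.com/v1/chat/completions"

def build_chat_request(model: str, user_message: str) -> urllib.request.Request:
    """Build a chat-completion request using API credentials from the environment."""
    api_key = os.environ.get("NVIDIA_API_KEY", "")  # from your developer account
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }).encode()
    return urllib.request.Request(API_URL, data=payload, headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    })

# The model name is illustrative, not a confirmed identifier.
req = build_chat_request("nemotron-3-super", "Draft a release note for v0.2.")
```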
NemoClaw's privacy router is one of its more practical enterprise features. It lets you define rules that determine whether a given request goes to a local model or a cloud model based on content classification. For example: internal business data stays local, general queries can use cloud inference.
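The routing idea can be sketched in a few lines. This is a hypothetical illustration of content-based routing, not NemoClaw's actual rule syntax: the marker keywords stand in for whatever content classifier your rules use.

```python
# Placeholder markers for "internal business data" -- a real deployment would
# use the privacy router's own classification rules, not a keyword list.
SENSITIVE_MARKERS = ("internal", "confidential", "payroll", "customer record")

def route(prompt: str) -> str:
    """Return 'local' for requests that look like internal business data,
    'cloud' for everything else."""
    text = prompt.lower()
    if any(marker in text for marker in SENSITIVE_MARKERS):
        return "local"
    return "cloud"
```

With rules like these, a query mentioning a confidential memo stays on the local model while a general knowledge question is free to use cloud inference.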
NemoClaw uses NVIDIA's OpenShell runtime to provide sandboxed execution. The security model is built on four policy-controlled layers:
These aren't optional add-ons — they're the default execution model. Every agent workload in NemoClaw runs inside the OpenShell sandbox. You configure the policies; the runtime enforces them.
Rather than writing security policies from scratch, NemoClaw ships with ready-made policy presets for common deployment scenarios:
To apply a policy preset during configuration, select the preset that matches your deployment. Presets can be stacked and customized — you're not locked into using them exactly as shipped.
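Stacking presets amounts to layering policy maps, with later layers overriding earlier ones. The preset names and policy keys below are illustrative, not NemoClaw's actual preset catalog — the point is the merge order.

```python
def apply_presets(*layers: dict) -> dict:
    """Merge policy layers left to right; later layers override earlier keys."""
    policy: dict = {}
    for layer in layers:
        policy.update(layer)
    return policy

# Hypothetical presets -- real preset names and keys will differ.
STRICT_NETWORK = {"network": "deny-all", "dns": "blocked"}
DEV_FILESYSTEM = {"filesystem": "workspace-only"}

# A site-specific override stacked on top of two shipped presets.
policy = apply_presets(STRICT_NETWORK, DEV_FILESYSTEM, {"network": "allowlist"})
```

Because the final layer wins, a deployment can start from a strict shipped preset and loosen only the specific keys it needs.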
NemoClaw is designed to work alongside OpenClaw, NVIDIA's open-source AI agent framework. In a typical setup:
Together, they form a complete AI agent stack — OpenClaw manages what the agent does, NemoClaw controls how it does it securely. If you're already running OpenClaw agents and want to add Nemotron model support with enterprise security controls, NemoClaw is the recommended upgrade path.
As of March 16, 2026, NemoClaw is in alpha. A few things worth knowing before you deploy:
If you're evaluating NemoClaw for an enterprise deployment, plan for these constraints and factor in the time required to stay current with upstream changes.
NemoClaw is a good fit if you're:
It's a heavier stack than a minimal OpenClaw install — the security infrastructure and Docker dependencies add setup overhead. For teams without dedicated DevOps capacity, the managed setup route may be more practical.
We build custom AI agents for solopreneurs and small business owners. Book a free 15-minute call — no commitment.