
NemoClaw Setup Guide 2026: How to Install NVIDIA's Open-Source AI Agent Stack

NemoClaw is an open-source reference stack built and maintained by NVIDIA.

When combined with OpenClaw, NemoClaw provides the secure execution layer that enterprise teams need when deploying AI agents on sensitive workloads.

System Requirements

Before installing NemoClaw, confirm your system meets the minimum requirements:

| Component | Minimum |
| --- | --- |
| CPU | 4+ vCPU |
| RAM | 8–16 GB |
| Disk | 20–40 GB free |
| Node.js | v20+ |

Required software:

- Docker
- Node.js v20 or newer

For local inference with Nemotron models, you'll need significantly more resources; at least 16 GB of RAM and a capable GPU are recommended for the larger Nemotron variants. If you're using NVIDIA's cloud Endpoint API for inference, the base requirements above are sufficient.
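On a Linux host you can check your machine against these minimums quickly with standard tools (nothing NemoClaw-specific):

```shell
# vCPU count (want 4+)
nproc
# total and available RAM (want 8–16 GB)
free -h | grep -i '^mem' || true
# free disk on the current filesystem (want 20–40 GB free)
df -h .
# Node.js version, if installed (want v20+)
node --version 2>/dev/null || echo "Node.js not found"
```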

Installation

NemoClaw installs with a single command:

```shell
curl -fsSL https://nvidia.com/nemoclaw.sh | bash
```

The install script will:

  1. Check your system for Docker and Node.js 20+
  2. Pull the NemoClaw container images
  3. Initialize the OpenShell security runtime
  4. Set up default security policies
  5. Configure the local agent workspace
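The version gate in step 1 can be sketched in a few lines of shell (an illustration of the check, not the installer's actual code):

```shell
# Compare a `node --version` string against the required major version.
check_node() {
  major=${1#v}          # strip the leading "v"
  major=${major%%.*}    # keep only the major component
  if [ "$major" -ge 20 ]; then echo ok; else echo "need v20+"; fi
}

check_node v20.11.1   # prints: ok
check_node v18.19.0   # prints: need v20+
```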
Review before running: as with any install script piped directly to bash, it's good practice to fetch and read the script first (`curl -fsSL https://nvidia.com/nemoclaw.sh -o nemoclaw.sh`, then `less nemoclaw.sh`) before executing it, especially in production environments.

Inference Configuration: Local vs Cloud

One of NemoClaw's key design decisions is its flexibility around model inference. You have two primary options, plus a mixed mode that combines them:

Option A: Local Inference via Ollama

Running models locally keeps your data on-premises. NemoClaw supports local inference via Ollama, which handles model downloads and serving. This is the right choice when your data can't leave your own infrastructure.

For local inference, you'll need sufficient hardware — Nemotron 3 Super 120B is a large model. Smaller Nemotron variants or other Ollama-compatible models work well on consumer hardware within the RAM requirements above.
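If you take this route, the Ollama side works like any other Ollama model workflow. The model name below is illustrative; check the Ollama library for the Nemotron builds currently published:

```shell
# Pull a smaller Nemotron variant that fits within the base RAM requirements
ollama pull nemotron-mini

# Sanity-check it from the command line before wiring it into NemoClaw
ollama run nemotron-mini "Reply with one word: ready?"
```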

Option B: Cloud Inference via NVIDIA Endpoint API

NVIDIA's Endpoint API gives you access to the full Nemotron model lineup, including Nemotron 3 Super 120B, without running the model locally. This is suitable when you want the larger models without provisioning GPU hardware yourself.

You'll need an NVIDIA developer account and API credentials to use this option.
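Credential wiring is typically just an environment variable. The variable name below is an assumption for illustration, not a documented NemoClaw setting:

```shell
# Hypothetical variable name; check NemoClaw's config docs for the real one.
# Keys issued through NVIDIA's API catalog commonly use an "nvapi-" prefix.
export NVIDIA_API_KEY="nvapi-your-key-here"
```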

Option C: Privacy Router (Mixed Mode)

NemoClaw's privacy router is one of its more practical enterprise features. It lets you define rules that determine whether a given request goes to a local model or a cloud model based on content classification. For example: internal business data stays local, general queries can use cloud inference.
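The routing decision itself is easy to picture. Here is a toy sketch in shell, with keyword matching standing in for NemoClaw's actual content classifier (whose rule format isn't documented here):

```shell
# Decide where a request should run: local for sensitive content, cloud otherwise.
route_for() {
  case "$1" in
    *internal*|*confidential*|*payroll*) echo local ;;
    *)                                   echo cloud ;;
  esac
}

route_for "summarize this confidential contract"   # prints: local
route_for "what's a good name for a newsletter?"   # prints: cloud
```

A real classifier would be policy-driven rather than keyword-based, but the contract is the same: classify the request, then pick an inference backend.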

Security Architecture

NemoClaw uses NVIDIA's OpenShell runtime to provide sandboxed execution. The security model is built on four policy-controlled layers.

These aren't optional add-ons — they're the default execution model. Every agent workload in NemoClaw runs inside the OpenShell sandbox. You configure the policies; the runtime enforces them.

Policy Presets

Rather than writing security policies from scratch, NemoClaw ships with ready-made policy presets for common deployment scenarios.

To apply a policy preset during configuration, select the preset that matches your deployment. Presets can be stacked and customized — you're not locked into using them exactly as shipped.
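Stacking and customizing a preset might look something like the following. The file name, keys, and preset name are all assumptions for illustration, not NemoClaw's documented schema:

```shell
# Hypothetical policy file: start from a shipped preset, then override one layer.
cat > nemoclaw-policy.yaml <<'EOF'
preset: enterprise-default     # assumed preset name
overrides:
  network:
    allow_outbound: false      # tighten one layer without rewriting the rest
EOF

cat nemoclaw-policy.yaml
```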

NemoClaw and OpenClaw

NemoClaw is designed to work alongside OpenClaw, NVIDIA's open-source AI agent framework.

Together, they form a complete AI agent stack — OpenClaw manages what the agent does, NemoClaw controls how it does it securely. If you're already running OpenClaw agents and want to add Nemotron model support with enterprise security controls, NemoClaw is the recommended upgrade path.

License: NemoClaw is released under the Apache 2.0 license. You can use, modify, and deploy it commercially. The NVIDIA Nemotron models themselves may have separate licensing terms — check the model cards for specifics.

Known Limitations (Alpha)

As of March 16, 2026, NemoClaw is in alpha, so expect rough edges and frequent upstream changes before you deploy.

If you're evaluating NemoClaw for an enterprise deployment, plan for these constraints and factor in the time required to stay current with upstream changes.

Who Should Use NemoClaw?

NemoClaw is a good fit for enterprise teams deploying AI agents on sensitive workloads, especially if you already run OpenClaw and have the DevOps capacity to operate the stack.

It's a heavier stack than a minimal OpenClaw install — the security infrastructure and Docker dependencies add setup overhead. For teams without dedicated DevOps capacity, the managed setup route may be more practical.


