NVIDIA NemoClaw
The Enterprise OpenClaw Stack
NVIDIA's open-source AI agent platform — built on OpenClaw, hardened with the OpenShell sandbox, and bundled with Nemotron open models. Announced today at GTC 2026 in San Jose, NemoClaw targets the enterprise security problems that have kept OpenClaw out of production deployments.
// the_problem
OpenClaw became the fastest-growing consumer AI agent platform in history — but its viral growth exposed severe security problems that made enterprise adoption nearly impossible. When OpenAI acquired it in February 2026, the threat model was already well documented: malicious skills in the public registry, agents with unconstrained file-system and network access, and zero runtime-level enforcement of privacy policy.
NemoClaw is NVIDIA's answer. Rather than build a competing platform from scratch, NVIDIA wrapped OpenClaw in an enterprise-grade security and privacy layer — the OpenShell runtime — and bundled it with open Nemotron models so enterprise teams can run agents locally without routing sensitive data through external cloud APIs.
NemoClaw itself is still early-stage with "rough edges" per NVIDIA's own GitHub documentation. OpenShell and the NemoClaw plugin API are under active development and subject to breaking changes. NVIDIA explicitly states it "should not yet be considered production-ready." This guide reflects facts confirmed as of March 16, 2026.
// architecture
NemoClaw is a three-component stack bundled under the NVIDIA Agent Toolkit umbrella. Each component addresses a distinct failure mode that makes enterprises reluctant to deploy autonomous agents.
OpenShell runtime
- Isolated sandbox for each agent execution
- Policy-based network and filesystem controls
- Privacy router: local vs. cloud model decisions
- Skill allowlist / blocklist enforcement
- Open source (Apache 2.0), part of Agent Toolkit

Nemotron 3 models
- Nemotron 3 Ultra — frontier intelligence, NVFP4
- Nemotron 3 Omni — multimodal (audio + vision + text)
- 5× throughput on Blackwell vs. FP8 baseline
- Compatible with OpenClaw ClawdBot model slot
- Runs on RTX PCs, RTX PRO, DGX Station / Spark

Reference agent (AgentIQ)
- Autonomous document analysis & summarization
- Multi-step web research workflows
- Runs inside OpenShell (sandboxed)
- Reference implementation for Nemotron integration
- Open source at github.com/NVIDIA/AgentIQ
OpenClaw agent receives user request → NemoClaw plugin intercepts execution → OpenShell evaluates the request against your privacy/network policy → routes sensitive tasks to Nemotron (local) and general tasks to a frontier model (cloud) → agent output is returned with a full audit trail. (Source: VentureBeat)
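The request flow above can be sketched as a minimal pipeline. Everything here is illustrative: the function names, the policy shape, and the audit-trail format are assumptions for explanation, not NemoClaw's actual API.

```python
# Illustrative sketch of the request flow above; all names and the
# policy shape are hypothetical, not NemoClaw's actual interfaces.

def classify(request: str) -> set:
    """Toy data classifier: tag a request as 'pii' if it looks sensitive."""
    return {"pii"} if "ssn" in request.lower() else set()

def route(request: str, policy: dict, audit: list) -> str:
    """Route sensitive requests to the local model, others to the cloud,
    appending an audit-trail entry either way."""
    tags = classify(request)
    target = "local-nemotron" if tags & policy["block_cloud_for"] else "cloud-frontier"
    audit.append({"request": request, "tags": sorted(tags), "routed_to": target})
    return target

policy = {"block_cloud_for": {"pii", "financial"}}
audit = []
print(route("Summarize this SSN list", policy, audit))         # local-nemotron
print(route("What is the capital of France?", policy, audit))  # cloud-frontier
```

The key property is that every request, allowed or blocked, leaves an entry in the audit list, matching the "full audit trail" claim above.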
// getting_started
NemoClaw installs as an OpenClaw plugin.
The stack — OpenShell runtime + Nemotron models — is designed to deploy in a single command via the openclaw CLI.
Per NVIDIA, it is early-stage and interfaces may change without notice.
Requirements & installation
# Install NemoClaw plugin via the OpenClaw CLI
openclaw install nemoclaw

# With Nemotron 3 Ultra (requires NVIDIA GPU)
openclaw install nemoclaw --model nemotron-3-ultra

# With Nemotron 3 Omni (multimodal)
openclaw install nemoclaw --model nemotron-3-omni

# Cloud-only mode (no GPU required)
openclaw install nemoclaw --privacy-router cloud-only
# Verify installation
openclaw nemoclaw status

# View current sandbox policy
openclaw nemoclaw policy show

# Block PII and financial data from cloud routing
openclaw nemoclaw policy set --block-cloud-for pii,financial,health

# List installed Nemotron models
openclaw nemoclaw models list
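As a rough illustration of what a `--block-cloud-for` value might translate into internally, the sketch below parses the comma-separated class list into a policy structure. The config shape is purely an assumption; the CLI's real internal format is not documented.

```python
# Hypothetical sketch: parse a --block-cloud-for flag value into a
# routing-policy dict. The real config format is an assumption.

def parse_block_cloud_for(flag_value: str) -> dict:
    """Turn 'pii,financial,health' into a routing-policy structure."""
    classes = {c.strip() for c in flag_value.split(",") if c.strip()}
    return {"block_cloud_for": classes, "default_route": "cloud"}

policy = parse_block_cloud_for("pii,financial,health")
print(sorted(policy["block_cloud_for"]))  # ['financial', 'health', 'pii']
```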
// nemoclaw_vs_openclaw
| Dimension | OpenClaw | NemoClaw |
|---|---|---|
| Primary language | TypeScript / Node.js | Python + NeMo framework ↗ |
| Target audience | Developers, consumers | Enterprise, DevSecOps ↗ |
| Security model | Permissive — application-level only | OpenShell sandbox — runtime-level ↗ |
| Data privacy | All requests route to cloud APIs | Privacy router — local-first with cloud fallback ↗ |
| Bundled models | Any (user-configured) | Nemotron 3 Ultra + Omni, open weights ↗ |
| Skill registry safety | ~900 malicious skills ↗ | Allowlist + policy enforcement per skill |
| Messaging platforms | WhatsApp, Telegram, Discord, iMessage | Orchestration layer — not messaging-first ↗ |
| Hardware requirement | Any (cloud models) | NVIDIA GPU for local; cloud-only mode: any hardware ↗ |
| Production readiness | Stable (post-OpenAI acquisition) | Early access — not production-ready ↗ |
| License | MIT | Apache 2.0 ↗ |
| Ownership | OpenAI (acquired Feb 2026) | NVIDIA (open community) ↗ |
| 3rd-party security audit | Microsoft advisory; DigitalOcean CVE list | None yet — too new |
// enterprise_context
NVIDIA pitched NemoClaw to five enterprise partners before the GTC announcement: Salesforce (CRM and customer-facing AI), Cisco (infrastructure and networking), Google (cloud and AI platform), Adobe (creative and document workflows), and CrowdStrike (security — the most symbolically resonant partner given OpenClaw's track record).
Because the project is open source, the partnership model is contribution-based rather than licensed: early-access partners are expected to contribute code, resources, or integration work, not pay licensing fees. This aligns with NVIDIA's broader strategy of using open-source ecosystem positioning to drive hardware adoption — enterprises running NemoClaw at scale will overwhelmingly do so on NVIDIA GPU infrastructure.
The hardware-agnostic claim is a strategic gesture to lower adoption barriers. Real performance advantages are on Blackwell-class NVIDIA GPUs using NVFP4, where Nemotron 3 Ultra achieves the 5× throughput multiplier cited in the announcement.
// developer_faq
Do I need OpenClaw installed first?
NemoClaw requires an existing OpenClaw installation — it is a plugin, not a replacement. Install OpenClaw first, then add NemoClaw via openclaw install nemoclaw. Per NVIDIA's GitHub README, the NemoClaw stack wraps OpenClaw's agent runtime with OpenShell; the underlying agent framework is unchanged.
This means full compatibility with the existing OpenClaw skill ecosystem is retained, but all skill executions are routed through the OpenShell sandbox, where your policy configuration applies.
Can I run NemoClaw without an NVIDIA GPU?
Yes — NemoClaw in cloud-only mode is hardware-agnostic and works on any machine that can run OpenClaw. In cloud mode, OpenShell's privacy router enforces data classification rules while all model calls route to external frontier models.
To use Nemotron local models (the core privacy value proposition), you need an NVIDIA GPU. Nemotron 3 Ultra targets Blackwell-class GPUs (RTX 5090, DGX Station) for the 5× NVFP4 throughput.
How is OpenShell different from running agents in Docker?
OpenShell is agent-specific sandboxing — it operates at the agent execution layer, not the process/OS layer. While you can run NemoClaw inside Docker, OpenShell intercepts individual tool calls (file reads, web fetches, shell commands) made by OpenClaw agents and evaluates each against policy before allowing or blocking it.
Key difference: OpenShell understands semantic data types (PII, financial data, code) and can block specific content from reaching cloud models even when the container itself has network access.
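The distinction can be sketched in code: a container firewall sees sockets, while an agent-layer sandbox sees each tool call and its content. Below is a toy illustration of that interception pattern; the wrapper, the tool names, and the PII check are all assumptions, not OpenShell's implementation.

```python
# Toy agent-layer interception: every tool call passes through a policy
# check before executing. Names and policy shape are illustrative only.

def contains_pii(text: str) -> bool:
    # Stand-in classifier; a real one would be far more sophisticated.
    return "ssn" in text.lower() or "credit card" in text.lower()

def guarded(tool_fn, policy):
    """Wrap a tool so each invocation is evaluated before it runs."""
    def wrapper(payload: str):
        if tool_fn.__name__ in policy["blocked_tools"]:
            return ("blocked", "tool not allowlisted")
        if policy["block_pii_egress"] and contains_pii(payload):
            return ("blocked", "payload contains PII")
        return ("allowed", tool_fn(payload))
    return wrapper

def web_fetch(payload):  # toy tool standing in for a real skill
    return f"fetched: {payload}"

policy = {"blocked_tools": {"shell_exec"}, "block_pii_egress": True}
safe_fetch = guarded(web_fetch, policy)
print(safe_fetch("https://example.com"))   # ('allowed', 'fetched: https://example.com')
print(safe_fetch("POST ssn=123-45-6789"))  # ('blocked', 'payload contains PII')
```

Note the second call is blocked on content, not on network access — the semantic check fires even though the tool itself is permitted.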
Do existing OpenClaw skills work in NemoClaw?
In principle, yes — NemoClaw inherits OpenClaw's full skill ecosystem. In practice, OpenShell's conservative default policy will block or restrict skills that request broad permissions (file system access, network egress, shell execution) until you explicitly allowlist them.
NVIDIA ships NemoClaw with a default policy that blocks high-risk skill categories. Enterprise teams are expected to build curated allowlists appropriate to their use case — the right approach given the ~900 flagged malicious skills in the public OpenClaw registry.
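The default-deny behavior described above might look roughly like the sketch below: unlisted skills requesting high-risk permissions are blocked until allowlisted. The permission names and skill names are hypothetical.

```python
# Illustrative sketch (not NVIDIA's implementation) of allowlist-based
# skill gating. Permission and skill names are hypothetical.
HIGH_RISK = {"filesystem", "network_egress", "shell"}

def skill_allowed(skill_name: str, requested_perms: list, allowlist: set) -> bool:
    """Allowlisted skills always run; unlisted skills run only if they
    request no high-risk permissions."""
    if skill_name in allowlist:
        return True
    return not (set(requested_perms) & HIGH_RISK)

# A vetted PDF skill is allowlisted despite needing file access:
print(skill_allowed("summarize_pdf", ["filesystem"], {"summarize_pdf"}))  # True
# An unvetted skill asking for shell + network is blocked by default:
print(skill_allowed("crypto_miner", ["shell", "network_egress"], set()))  # False
```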
What is the privacy router?
The privacy router is OpenShell's model routing layer. It inspects each agent request and decides — based on your policy — whether the request should go to a local Nemotron model or exit to a cloud frontier model.
You configure data classification rules, e.g. --block-cloud-for pii,financial: any request containing PII or financial data routes to local Nemotron instead of OpenAI/Anthropic/Google. General requests can freely use cloud models. This hybrid local/cloud architecture is the core enterprise value proposition.
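As a toy illustration of the kind of data-classification rule this implies, the sketch below tags text by pattern and picks a destination. The regexes and class names are assumptions for illustration; a production classifier would be far more robust than pattern matching.

```python
import re

# Toy data classifiers; patterns are illustrative, not NemoClaw's rules.
CLASSIFIERS = {
    "pii": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US SSN shape
    "financial": re.compile(r"\b\d{4}(?:[ -]\d{4}){3}\b"),  # 16-digit card shape
}

def classify(text: str) -> set:
    """Return the set of sensitive-data classes detected in the text."""
    return {name for name, rx in CLASSIFIERS.items() if rx.search(text)}

def destination(text: str, blocked_classes: set) -> str:
    """Pick local vs. cloud based on which classes the policy blocks."""
    return "local-nemotron" if classify(text) & blocked_classes else "cloud"

print(destination("Customer SSN: 123-45-6789", {"pii", "financial"}))    # local-nemotron
print(destination("Draft a blog post about GPUs", {"pii", "financial"}))  # cloud
```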
How do NeMo, NIM, and NemoClaw relate?
NeMo is NVIDIA's pre-existing generative AI training and inference framework (Apache 2.0, 13K+ GitHub stars). It provides model training infrastructure, NVFP4 quantization tooling, and serving optimizations for Nemotron.
NIM (NVIDIA Inference Microservices) is the production inference layer — containerized, OpenAI-API-compatible model endpoints optimized for NVIDIA hardware. Reference deployment via github.com/NVIDIA/nim-deploy.
NemoClaw ties these together at the agent layer: it installs Nemotron (trained via NeMo, served via NIM) and connects it to OpenClaw agents through OpenShell's privacy router. Think: NeMo = model infrastructure · NIM = inference serving · NemoClaw = agent integration + security.
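"OpenAI-API-compatible" means a NIM endpoint accepts the same chat-completion request shape as the OpenAI API. The sketch below just builds such a request; the base URL and served-model name are assumptions, not documented NemoClaw values.

```python
# Sketch of an OpenAI-style chat request aimed at a local NIM endpoint.
# The base_url and model id below are hypothetical placeholders.

def build_chat_request(model: str, messages: list,
                       base_url: str = "http://localhost:8000/v1") -> dict:
    """Assemble the URL and JSON body for an OpenAI-compatible endpoint."""
    return {
        "url": f"{base_url}/chat/completions",
        "body": {"model": model, "messages": messages},
    }

req = build_chat_request("nemotron-3-ultra",
                         [{"role": "user", "content": "Summarize this report."}])
print(req["url"])  # http://localhost:8000/v1/chat/completions
```

Because the request shape matches, existing OpenAI-client tooling can point at a local Nemotron endpoint by swapping the base URL.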
When will NemoClaw be production-ready?
NVIDIA has not given a timeline. Per the repository itself, it is "early-stage with rough edges" and "shared to gather feedback and enable early experimentation." The NemoClaw plugin CLI is under active development with breaking changes expected.
Given NVIDIA's enterprise software cadence and the five enterprise partners it was pitched to, a stable release is likely timed to a partnership announcement. Watch github.com/NVIDIA/NemoClaw/releases and ClawBeat's homepage feed for updates.
How does NemoClaw relate to NeMo Guardrails?
NeMo Guardrails is a separate, older NVIDIA open-source project for adding safety and topical guardrails to LLM applications — it operates at the LLM output layer via programmable rules.
OpenShell in NemoClaw operates at the agent execution layer — controlling what tools/skills agents can invoke and what data can exit to cloud models. The two are complementary. NemoClaw may incorporate NeMo Guardrails as an additional safety layer, though this has not been confirmed in the launch materials.