ClawBeat
March 16, 2026 · GTC 2026 · Jensen Huang Keynote · Open Source · Apache 2.0

Nvidia NemoClaw
The Enterprise OpenClaw Stack

NVIDIA's open-source AI agent platform — built on OpenClaw, hardened with the OpenShell sandbox, and bundled with Nemotron open models. Announced today at GTC 2026 in San Jose, NemoClaw targets the enterprise security problems that have kept OpenClaw out of production deployments.

By ClawBeat · March 16, 2026 · ~12 min read · github.com/NVIDIA/NemoClaw ↗
License: Apache 2.0
Status: Early Access
Language: Python
Base: OpenClaw

// timeline

February 2026
OpenClaw acquired by OpenAI
OpenAI acquired OpenClaw from creator Peter Steinberger. Enterprise distrust was already high over documented security vulnerabilities: ~900 malicious skills and ~135K exposed instances.
March 10, 2026
CNBC / Wired break the story
CNBC reports NVIDIA has been pitching "NemoClaw" to Salesforce, Cisco, Google, Adobe, and CrowdStrike under an open-source, contribution-for-early-access model.
March 16, 2026 — today
Official launch at GTC 2026 keynote
Jensen Huang announces NemoClaw stack publicly: NemoClaw runtime + OpenShell + AI-Q + Nemotron family. GitHub repo goes live under NVIDIA org, Apache 2.0.

// the_problem

OpenClaw became the fastest-growing consumer AI agent platform in history — but its viral growth surfaced a severe security picture that made enterprise adoption nearly impossible. When OpenAI acquired it in February 2026, the threat model was already well-documented: malicious skills in the public registry, agents with unconstrained file system and network access, and zero enforcement of privacy policy at the runtime level.

NemoClaw is NVIDIA's answer. Rather than build a competing platform from scratch, NVIDIA wrapped OpenClaw in an enterprise-grade security and privacy layer — the OpenShell runtime — and bundled it with open Nemotron models so enterprise teams can run agents locally without routing sensitive data through external cloud APIs.

Malicious Skills Registry
~900 malicious skills documented in the OpenClaw public skill registry. With no sandboxing, nothing prevents skill code from accessing system resources or exfiltrating data.
Exposed Instances
~135,000 publicly exposed OpenClaw instances with no authentication. Researchers demonstrated remote agent hijacking via unauthenticated API endpoints.
Unconstrained File Access
Default OpenClaw agents run with full user-space file system permissions. No enforcement of data access policies or network egress rules at the runtime level.
Cloud Data Leakage
All OpenClaw requests route through external frontier model APIs. Sensitive enterprise data (contracts, code, internal docs) transits third-party infrastructure by design.
Early-stage caveat

NemoClaw itself is still early-stage with "rough edges" per NVIDIA's own GitHub documentation. OpenShell and the NemoClaw plugin API are under active development and subject to breaking changes. NVIDIA explicitly states it "should not yet be considered production-ready." This guide reflects facts confirmed as of March 16, 2026.

// architecture

NemoClaw is a three-component stack bundled under the NVIDIA Agent Toolkit umbrella. Each component addresses a distinct failure mode that makes enterprises reluctant to deploy autonomous agents.

OpenShell Runtime
Security & Privacy Layer
The sandboxed execution environment that wraps OpenClaw agents. Enforces network egress rules, data access policies, and model routing at the process level — not the application level.
Nemotron Models
Open Local Model Family
NVIDIA's open-weight model family, installable locally with a single command. Eliminates cloud dependency for sensitive workloads.
  • Nemotron 3 Ultra — frontier intelligence, NVFP4
  • Nemotron 3 Omni — multimodal (audio + vision + text)
  • 5× throughput on Blackwell vs. FP8 baseline
  • Compatible with OpenClaw ClawdBot model slot
  • Runs on RTX PCs, RTX PRO, DGX Station / Spark
AI-Q
Research Agent Blueprint
Open research agent blueprint for autonomous document and web research. Third component of the NVIDIA Agent Toolkit — pre-built, enterprise-tested agentic workflow.
  • Autonomous document analysis & summarization
  • Multi-step web research workflows
  • Runs inside OpenShell (sandboxed)
  • Reference implementation for Nemotron integration
  • Open source at github.com/NVIDIA/AgentIQ
How the stack flows

OpenClaw agent → receives user request → NemoClaw plugin intercepts execution → OpenShell evaluates against your privacy/network policy → routes to Nemotron (local) for sensitive tasks or frontier model via privacy router (cloud) for general tasks → agent output returned with full audit trail. Source: VentureBeat
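The routing decision at the heart of that flow can be pictured with a small Python sketch. This is a toy illustration only: the class and method names are invented, and the keyword matching stands in for the semantic data classification the real layer would perform.

```python
# Toy sketch of the privacy-router flow: classify a request, pick a target,
# and record an audit entry. All names are illustrative, not NemoClaw's API.
from dataclasses import dataclass, field

SENSITIVE = {"pii", "financial", "health"}  # categories kept off the cloud

@dataclass
class AuditEntry:
    request: str
    categories: frozenset
    target: str

@dataclass
class PrivacyRouter:
    audit_log: list = field(default_factory=list)

    def classify(self, request: str) -> frozenset:
        # Placeholder keyword classifier; a real layer would detect data
        # types semantically (PII, financial records, source code).
        text = request.lower()
        tags = set()
        if "ssn" in text or "passport" in text:
            tags.add("pii")
        if "invoice" in text or "payroll" in text:
            tags.add("financial")
        return frozenset(tags)

    def route(self, request: str) -> str:
        categories = self.classify(request)
        target = "local-nemotron" if categories & SENSITIVE else "cloud-frontier"
        self.audit_log.append(AuditEntry(request, categories, target))
        return target

router = PrivacyRouter()
print(router.route("Redact the SSN in this form"))   # local-nemotron
print(router.route("What changed in Python 3.12?"))  # cloud-frontier
```

The audit log mirrors the "full audit trail" in the flow description: every routing decision is recorded alongside the categories that triggered it.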

// nemotron_models

Nemotron 3 Ultra
Flagship LLM
Quantization: NVFP4
Platform: Blackwell (RTX / DGX)
Throughput vs FP8: 5× faster
License: Open weights
Modality: Text → Text
Nemotron 3 Omni
Multimodal
Audio, vision, and language understanding in a single model — enables NemoClaw agents to extract information from documents, images, and voice commands natively.
Input modalities: Audio + Vision + Text
Platform: RTX PRO / DGX Spark
Use case: Document + voice agents
License: Open weights
Framework: NeMo + NIM

// getting_started

NemoClaw installs as an OpenClaw plugin. The stack — OpenShell runtime + Nemotron models — is designed to deploy in a single command via the openclaw CLI. Per NVIDIA, it is early-stage and interfaces may change without notice.

Requirements

  • OpenClaw: latest stable release. NemoClaw is a plugin; a base OpenClaw install is required.
  • Python: 3.10 or higher. NemoClaw is Python-based, unlike OpenClaw's TypeScript stack.
  • GPU (local models): NVIDIA RTX or DGX. RTX 5090 / DGX-class hardware for full NVFP4 throughput; cloud-only mode runs on any machine.
  • VRAM: 24 GB+ recommended. NVFP4 quantization lowers VRAM needs relative to FP16; cloud mode needs no GPU.
shell · install nemoclaw
# Install NemoClaw plugin via the OpenClaw CLI
openclaw install nemoclaw

# With Nemotron 3 Ultra (requires NVIDIA GPU)
openclaw install nemoclaw --model nemotron-3-ultra

# With Nemotron 3 Omni (multimodal)
openclaw install nemoclaw --model nemotron-3-omni

# Cloud-only mode (no GPU required)
openclaw install nemoclaw --privacy-router cloud-only
shell · configure OpenShell policy
# Verify installation
openclaw nemoclaw status

# View current sandbox policy
openclaw nemoclaw policy show

# Block PII and financial data from cloud routing
openclaw nemoclaw policy set --block-cloud-for pii,financial,health

# List installed Nemotron models
openclaw nemoclaw models list
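For intuition about what a flag like --block-cloud-for pii,financial,health implies, here is a hypothetical parser for such a comma-separated category value. The category names and validation rules are assumptions for illustration, not the actual CLI internals.

```python
# Hypothetical parsing/validation for a comma-separated data-category flag,
# e.g. `--block-cloud-for pii,financial,health`. Category set is assumed.
KNOWN_CATEGORIES = {"pii", "financial", "health", "code", "credentials"}

def parse_block_cloud_for(value: str) -> set[str]:
    """Split, trim, lowercase, and validate the category list."""
    categories = {part.strip().lower() for part in value.split(",") if part.strip()}
    unknown = categories - KNOWN_CATEGORIES
    if unknown:
        raise ValueError(f"unknown data categories: {sorted(unknown)}")
    return categories

print(sorted(parse_block_cloud_for("pii, financial,health")))
# ['financial', 'health', 'pii']
```

Rejecting unknown categories at parse time matters for a security flag: a silently ignored typo (say, "finacial") would otherwise leave sensitive data routable to the cloud.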

// nemoclaw_vs_openclaw

Dimension | OpenClaw | NemoClaw
Primary language | TypeScript / Node.js | Python + NeMo framework
Target audience | Developers, consumers | Enterprise, DevSecOps
Security model | Permissive, application-level only | OpenShell sandbox, runtime-level
Data privacy | All requests route to cloud APIs | Privacy router: local-first with cloud fallback
Bundled models | Any (user-configured) | Nemotron 3 Ultra + Omni, open weights
Skill registry safety | ~900 malicious skills | Allowlist + per-skill policy enforcement
Messaging platforms | WhatsApp, Telegram, Discord, iMessage | Orchestration layer, not messaging-first
Hardware requirement | Any (cloud models) | NVIDIA GPU for local; cloud-only mode runs anywhere
Production readiness | Stable (post-OpenAI acquisition) | Early access, not production-ready
License | MIT | Apache 2.0
Ownership | OpenAI (acquired Feb 2026) | NVIDIA (open community)
Third-party security audit | Microsoft advisory; DigitalOcean CVE list | None yet (too new)

// enterprise_context

NVIDIA positioned NemoClaw to five enterprise verticals before the GTC announcement: Salesforce (CRM and customer-facing AI), Cisco (infrastructure and networking), Google (cloud and AI platform), Adobe (creative and document workflows), and CrowdStrike (security — the most symbolically resonant partner given OpenClaw's track record).

Because the project is open source, the partnership model is contribution-based rather than licensed: early-access partners are expected to contribute code, resources, or integration work, not pay licensing fees. This aligns with NVIDIA's broader strategy of using open-source ecosystem positioning to drive hardware adoption — enterprises running NemoClaw at scale will overwhelmingly do so on NVIDIA GPU infrastructure.

The hardware-agnostic claim is a strategic gesture to lower adoption barriers. Real performance advantages are on Blackwell-class NVIDIA GPUs using NVFP4, where Nemotron 3 Ultra achieves the 5× throughput multiplier cited in the announcement.

// developer_faq

Does NemoClaw replace OpenClaw?

NemoClaw requires an existing OpenClaw installation — it is a plugin, not a replacement. Install OpenClaw first, then add NemoClaw via openclaw install nemoclaw. Per NVIDIA's GitHub README, the NemoClaw stack wraps OpenClaw's agent runtime with OpenShell; the underlying agent framework is unchanged.

This means full compatibility with the existing OpenClaw skill ecosystem is retained, but all skill executions are routed through the OpenShell sandbox where your policy configuration applies.

Can I run NemoClaw without an NVIDIA GPU?

Yes — NemoClaw in cloud-only mode is hardware-agnostic and works on any machine that can run OpenClaw. In cloud mode, OpenShell's privacy router enforces data classification rules while all model calls route to external frontier models.

To use Nemotron local models (the core privacy value proposition), you need an NVIDIA GPU. Nemotron 3 Ultra targets Blackwell-class GPUs (RTX 5090, DGX Station) for the 5× NVFP4 throughput.

How is OpenShell different from running agents in a Docker container?

OpenShell is agent-specific sandboxing — it operates at the agent execution layer, not the process/OS layer. While you can run NemoClaw inside Docker, OpenShell intercepts individual tool calls (file reads, web fetches, shell commands) made by OpenClaw agents and evaluates each against policy before allowing or blocking it.

Key difference: OpenShell understands semantic data types (PII, financial data, code) and can block specific content from reaching cloud models even when the container itself has network access.
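That interception layer can be pictured with a small sketch. The policy shape, tool names, and hook mechanics below are all invented for illustration; OpenShell's real schema is not documented at this level of detail.

```python
# Illustrative agent-layer interception: every tool call is evaluated against
# a policy before it executes. Policy structure and tool names are assumed.
POLICY = {
    "file_read": {"allow": True, "paths": ("/workspace/",)},
    "web_fetch": {"allow": True},
    "shell_exec": {"allow": False},
}

def intercept(tool: str, **kwargs) -> str:
    rule = POLICY.get(tool)
    if rule is None or not rule.get("allow"):
        raise PermissionError(f"{tool}: blocked by policy")
    if tool == "file_read":
        path = kwargs.get("path", "")
        if not any(path.startswith(prefix) for prefix in rule["paths"]):
            raise PermissionError(f"{tool}: {path} outside allowed paths")
    return f"{tool}: permitted"

print(intercept("file_read", path="/workspace/report.md"))  # file_read: permitted
```

Unknown tools are blocked by default — the deny-by-default stance the article attributes to NemoClaw's shipped policy.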

Do existing OpenClaw skills work under NemoClaw?

In principle yes — NemoClaw inherits OpenClaw's full skill ecosystem. In practice, OpenShell's default conservative policy will block or restrict skills that request broad permissions (file system access, network egress, shell execution) until you explicitly allowlist them.

NVIDIA ships NemoClaw with a default policy that blocks high-risk skill categories. Enterprise teams are expected to build curated allowlists appropriate to their use case — the right approach given the ~900 flagged malicious skills in the public OpenClaw registry.
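A curated-allowlist policy of that kind reduces to a simple gate. In this sketch (skill and permission names invented for illustration), allowlisted skills run, and any other skill is admitted only if it requests no high-risk permissions:

```python
# Hypothetical skill gate: vetted skills on the allowlist run; any other
# skill is admitted only if it requests no high-risk permissions.
ALLOWLIST = {"doc-summarizer", "calendar-sync"}
HIGH_RISK = {"fs_write", "net_egress", "shell_exec"}

def may_run(skill: str, requested_permissions: set[str]) -> bool:
    if skill in ALLOWLIST:
        return True  # explicitly vetted by the enterprise team
    return not (requested_permissions & HIGH_RISK)

print(may_run("doc-summarizer", set()))         # True
print(may_run("random-skill", {"net_egress"}))  # False
```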

What is the privacy router?

The privacy router is OpenShell's model routing layer. It inspects each agent request and decides — based on your policy — whether the request should go to a local Nemotron model or exit to a cloud frontier model.

You configure data classification rules, e.g. --block-cloud-for pii,financial: any request containing PII or financial data routes to local Nemotron instead of OpenAI/Anthropic/Google. General requests can freely use cloud models. This hybrid local/cloud architecture is the core enterprise value proposition.

How do NeMo, NIM, and NemoClaw fit together?

NeMo is NVIDIA's pre-existing generative AI training and inference framework (Apache 2.0, 13K+ GitHub stars). It provides model training infrastructure, NVFP4 quantization tooling, and serving optimizations for Nemotron.

NIM (NVIDIA Inference Microservices) is the production inference layer — containerized, OpenAI-API-compatible model endpoints optimized for NVIDIA hardware. Reference deployment via github.com/NVIDIA/nim-deploy.

NemoClaw ties these together at the agent layer: it installs Nemotron (trained via NeMo, served via NIM) and connects it to OpenClaw agents through OpenShell's privacy router. Think: NeMo = model infrastructure · NIM = inference serving · NemoClaw = agent integration + security.
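Because NIM endpoints speak the OpenAI-compatible chat API, a NemoClaw-style client could address a local Nemotron deployment roughly as below. The base URL, port, and model identifier are illustrative assumptions, not documented NemoClaw defaults.

```python
# Assemble an OpenAI-compatible chat-completions request for a local NIM
# endpoint. URL, port, and model name are assumptions for illustration.
import json

def build_chat_request(prompt: str,
                       model: str = "nvidia/nemotron-3-ultra",
                       base_url: str = "http://localhost:8000/v1") -> dict:
    return {
        "method": "POST",
        "url": f"{base_url}/chat/completions",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.2,
        }),
    }

request = build_chat_request("Summarize this contract clause.")
print(request["url"])  # http://localhost:8000/v1/chat/completions
```

The OpenAI-compatible wire format is what lets the privacy router swap a local Nemotron endpoint in for a cloud frontier model without changing the agent-side request shape.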

When will NemoClaw be production-ready?

NVIDIA has not given a timeline. Per the repository itself, it is "early-stage with rough edges" and "shared to gather feedback and enable early experimentation." The NemoClaw plugin CLI is under active development with breaking changes expected.

Given NVIDIA's enterprise software cadence and the five partner verticals it was pitched to, a stable release is likely timed to a partnership announcement. Watch github.com/NVIDIA/NemoClaw/releases and ClawBeat's homepage feed for updates.

Is OpenShell related to NeMo Guardrails?

NeMo Guardrails is a separate, older NVIDIA open-source project for adding safety and topical guardrails to LLM applications — it operates at the LLM output layer via programmable rules.

OpenShell in NemoClaw operates at the agent execution layer — controlling what tools/skills agents can invoke and what data can exit to cloud models. The two are complementary. NemoClaw may incorporate NeMo Guardrails as an additional safety layer, though this has not been confirmed in the launch materials.

// resources

Official Sources

Press Coverage