NemoClaw Wants to Make OpenClaw Enterprise-Safe. Can It?

Nvidia's NemoClaw adds a security layer to OpenClaw. But when one in five ClawHub skills is malicious and the codebase has seven CVEs, is a runtime wrapper enough?

A Security Blanket for a Security Nightmare

Nvidia announced NemoClaw at GTC 2026 last weekend. The pitch: run one install command on top of OpenClaw and get enterprise-grade security. Sandboxing, privacy routing, policy enforcement. Problem solved.

Except the problem runs deeper than one command can fix.

OpenClaw has become the default platform for AI agents. It also has 42,900 publicly exposed instances across 82 countries. Seven CVEs disclosed in early 2026 alone. And roughly 20% of all skills published to ClawHub are malicious.

NemoClaw is Nvidia's answer to all of this. But the answer arrives stamped "early-stage alpha" with a warning to "expect rough edges."

What NemoClaw Actually Does

The core piece is OpenShell, an open-source runtime that sandboxes agents at the kernel level. It sits beneath OpenClaw and enforces security policies on what agents can access and do.
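Nvidia has not published OpenShell's policy format, but the general pattern of a default-deny, allowlist-style runtime policy can be sketched as follows. The policy shape, field names, and `permitted` function here are hypothetical illustrations, not OpenShell's actual API.

```python
# Hypothetical allowlist policy enforcement in the style of a sandboxing
# runtime: everything is denied unless the policy names it explicitly.
from fnmatch import fnmatch

POLICY = {
    "allow_read": ["/workspace/*", "/tmp/*"],  # readable path globs
    "allow_exec": ["git", "python3"],          # runnable binaries
    "allow_net": False,                        # deny all outbound network
}

def permitted(action: str, target: str) -> bool:
    """Return True only if the (action, target) pair matches the policy."""
    if action == "read":
        return any(fnmatch(target, pat) for pat in POLICY["allow_read"])
    if action == "exec":
        return target in POLICY["allow_exec"]
    if action == "net":
        return POLICY["allow_net"]
    return False  # default-deny anything the policy does not name

print(permitted("read", "/workspace/notes.md"))  # True: allowed path
print(permitted("exec", "curl"))                 # False: not allowlisted
print(permitted("net", "api.example.com"))       # False: network disabled
```

The key design choice is the final default-deny return: an action the policy never anticipated fails closed rather than open.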

Beyond sandboxing, NemoClaw includes a Privacy Router. It strips PII before data leaves for external services, using differential privacy tech Nvidia acquired from Gretel. The system works with any model provider: OpenAI, Anthropic, or Nvidia's own Nemotron.
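The Privacy Router's actual implementation rests on Gretel's differential-privacy technology, which is far more sophisticated than pattern matching. But the basic idea of scrubbing a prompt before it leaves for an external API can be illustrated with a naive regex pass; the patterns and `scrub` function below are a simplified sketch, not NemoClaw's method.

```python
# Naive PII scrubbing: replace recognizable spans with typed placeholders
# before the text is sent to an external model provider. Real systems use
# far stronger detection than these illustrative regexes.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace each recognized PII span with a [TYPE] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact jane.doe@example.com or 555-867-5309 about SSN 123-45-6789."
print(scrub(prompt))
# Contact [EMAIL] or [PHONE] about SSN [SSN].
```

Typed placeholders rather than blanket deletion let the external model still reason about the structure of the request without ever seeing the underlying values.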

Nvidia built this with OpenClaw creator Peter Steinberger. Adobe, Salesforce, SAP, CrowdStrike, and Dell are launch partners. The security integrations extend to Cisco, Google, and Microsoft Security.

On paper, it sounds comprehensive.

The Problem Beneath the Fix

OpenClaw is roughly 500,000 lines of code with 70+ dependencies and 53 configuration files. Its security track record reads like a horror story.

CVE-2026-25253 enabled one-click remote code execution via WebSocket token theft. Every version released before the January 29 patch was vulnerable. SecurityScorecard found those 42,900 exposed instances. 98.6% run on cloud platforms, not home networks. These are developer and enterprise deployments.

Then came ClawHavoc. Koi Security discovered 341 malicious skills in ClawHub, later rising to 1,184+. Of those, 335 installed Atomic Stealer malware on macOS via fake prerequisites. Bitdefender confirmed roughly 20% of all published skills were compromised.

Microsoft's security team flagged a "dual supply chain risk" where skills and external instructions converge in the same runtime. The agent runs with full host user privileges. No containerization. Secrets stored in plaintext.

An academic paper analyzed all of this and concluded bluntly: OpenClaw is "not recommended for enterprise deployment."

What NemoClaw Doesn't Fix

NemoClaw addresses runtime isolation. That matters. But security experts keep pointing at everything it leaves untouched.

Zahra Timsah, CEO of i-GENTIC AI, asks the key question: "Can you trust what they do when no one is watching?" She notes the platform still lacks observability, audit trails, and rollback capabilities.

Futurum Research analysts agree: "Security and accountability need to be embedded throughout the development lifecycle, not just at the runtime layer."

ClawHub's supply chain problem remains. NemoClaw sandboxes what agents do at runtime, but it doesn't verify what skills contain before they run. The marketplace that delivered 1,184 malicious packages is still the same marketplace.
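One common supply-chain mitigation, absent here, is pinning a cryptographic hash of each reviewed skill and refusing to run anything that no longer matches. The skill name and pinned digest below are placeholders for illustration; this is not a ClawHub or NemoClaw feature.

```python
# Generic hash-pinning check: a skill payload runs only if its SHA-256
# matches the digest recorded when the skill was reviewed. Any tampering
# after review changes the digest and the skill is rejected.
import hashlib

PINNED = {
    # skill name -> SHA-256 recorded at review time (placeholder entry:
    # this is the digest of an empty payload, used here for illustration)
    "web-search": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_skill(name: str, payload: bytes) -> bool:
    """Return True only if the payload hashes to the pinned digest."""
    digest = hashlib.sha256(payload).hexdigest()
    return PINNED.get(name) == digest

print(verify_skill("web-search", b""))          # True: matches the pin
print(verify_skill("web-search", b"tampered"))  # False: payload changed
```

A check like this catches post-review tampering, though it does nothing about a skill that was malicious at review time, which is exactly the ClawHavoc scenario.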

Governance across systems? Missing. Cross-system reasoning trust? Unaddressed. The fundamental question of whether autonomous agents should make decisions without human oversight? Still open.

The NanoClaw Contrast

Consider NanoClaw, a minimalist alternative. About 500 lines of code versus OpenClaw's 500,000. It uses OS-level container isolation (Docker or Apple Container) rather than application-layer guardrails.

NanoClaw is structurally isolated regardless of LLM behavior. The philosophy is different: security through simplicity rather than security bolted onto complexity.
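What OS-level isolation in the NanoClaw style looks like in practice can be sketched with a locked-down Docker invocation. The flags are standard Docker options; the image name, workdir, and helper function are placeholders, not NanoClaw's actual setup.

```python
# Build a docker run command that isolates an agent process at the OS
# level: no network, read-only root filesystem, no Linux capabilities,
# and a single writable workspace mount.
def sandboxed_cmd(image: str, workdir: str) -> list[str]:
    """Assemble a locked-down docker run invocation for an agent image."""
    return [
        "docker", "run", "--rm",
        "--network", "none",            # no outbound network at all
        "--read-only",                  # immutable root filesystem
        "--cap-drop", "ALL",            # drop every Linux capability
        "--security-opt", "no-new-privileges",
        "-v", f"{workdir}:/workspace",  # only the workspace is writable
        image,
    ]

cmd = sandboxed_cmd("agent-runtime:latest", "/tmp/agent-work")
print(" ".join(cmd))
# Launch with subprocess.run(cmd, check=True) once Docker is available.
```

The point of the contrast: these constraints hold no matter what the model inside decides to do, which is what "structurally isolated regardless of LLM behavior" means.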

This contrast highlights the core tension. NemoClaw wraps a security layer around a platform with deep architectural problems. It is like installing an alarm system in a house with no locks on the doors. The alarm helps. The doors are still unlocked.

The Strategic Angle

NemoClaw positions Nvidia as the infrastructure provider for the agentic AI era. Timsah noted developers will be attracted "not because it is better, but because it is faster on Nvidia hardware and easier if you are already in that ecosystem."

That is not a security argument. That is a platform play. Nvidia's own VP Kari Briski acknowledges agents remain "risky" with potential for "accessing sensitive data, misusing tools, or escalating privileges autonomously."

When the company building the security solution admits the underlying technology is risky, that is worth paying attention to.

Where This Leaves Enterprises

NemoClaw is a meaningful step. Runtime sandboxing is better than no sandboxing. Privacy routing is better than plaintext PII flying to external APIs. The launch partners suggest real enterprise interest.

But "early-stage alpha" and "enterprise-ready" are separated by a canyon. For organizations evaluating autonomous AI agents, the gap between those two phrases is where trust lives.

The smart move right now: watch NemoClaw's development closely, evaluate NanoClaw's simpler approach, and resist the urge to deploy autonomous agents faster than your security posture can support. The agents can wait. Your data cannot.
