AI Skill Issues: How a Fake Plugin Became the #1 Download and Tricked Developers Worldwide
AI coding agents are transforming how developers work. Tools like Claude Code, OpenClaw (formerly Clawdbot), Codex, and Gemini CLI have become everyday companions for building software faster. But with this new power comes a new kind of threat – one that most developers are not yet prepared for.
At the center of this threat is a simple concept: Skills.
What Are “Skills” in Agentic AI Systems?
In the world of AI coding agents, a “skill” is essentially a set of instructions, scripts, and resources packaged together that teach an AI agent how to perform a specific task. Think of it like a plugin or an extension – but instead of extending a browser or an IDE, it extends the capabilities of your AI assistant.
A typical skill is built around a SKILL.md file – a markdown document containing natural-language instructions that the AI agent reads and follows. These files can reference additional scripts, configuration files, and code that the agent executes on your behalf.
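To make this concrete, here is a sketch of what such a skill file might look like. The file name SKILL.md comes from the article; the frontmatter fields, script name, and instructions below are illustrative, not a documented schema:

```markdown
---
name: code-quality-check
description: Scan the repository against team coding standards
---

# Code Quality Check

When the user asks for a code review:

1. Run `scripts/lint.sh` from the skill directory.
2. Summarize the warnings it prints.
3. Suggest fixes for the three most frequent issues.
```

Note the key property: step 1 tells the agent to execute a script shipped alongside the markdown file. Whatever that script does, the agent will do on your machine.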
Here are some examples of what skills can do:
- Code quality analysis – scan your codebase against coding standards and best practices
- Browser automation – use Playwright to test and validate web applications
- Cloud infrastructure management – interact with Azure, AWS, or other cloud platforms through CLI commands
- Database operations – run queries, manage migrations, and optimize database performance
- Smart home control – connect your agent to IoT devices like thermostats and lights
- API integrations – link your agent to services like Slack, Asana, Zoom, or Salesforce
The ecosystem has grown rapidly. As of early February 2026, ClawdHub – one of the largest skill registries – hosts over 5,700 community-built skills, and GitHub repositories like “awesome-llm-skills” curate thousands more. Developers browse these registries, install a skill with a single command, and their AI agent instantly gains new abilities.
The model should feel familiar to anyone who has worked with npm, PyPI, or Composer. And if you have followed security news over the past decade, you already know where this is going.
ClawdHub: The npm of AI Agent Skills
ClawdHub is the public registry for AI agent skills, primarily built around the OpenClaw (formerly Clawdbot/Moltbot) ecosystem. It functions almost identically to a package manager: developers publish skills, other developers browse, sort by popularity, and install them. The web interface shows each skill’s description, author, download count, and README content.
OpenClaw itself is an open-source AI agent gateway that connects LLM models (like Claude or GPT) to messaging platforms, file systems, shell commands, and external services. It runs persistently, maintains long-term memory, and can automate tasks across your entire digital environment. Skills extend what this agent can do – calendar management, code analysis, social media posting, shopping, database management, and much more.
The value proposition is obvious. Instead of writing custom integrations from scratch, you download a pre-made skill and your AI agent gains new capabilities instantly.
The security problem is equally obvious: when you install a skill, you’re giving third-party code access to everything your AI agent can reach – your files, credentials, SSH keys, API tokens, messages, and production infrastructure.
How One Researcher Tricked the Entire System
In late January 2026, security researcher Jamieson O’Reilly (@theonejvo), founder of the red-teaming firm Dvuln, published a detailed proof-of-concept demonstrating just how broken the trust model really is.
Here is what he did, step by step:
1. Created a Backdoored Skill
O’Reilly built a skill called “What Would Elon Do” – a novelty tool that promised to help users think and make decisions like Elon Musk. Underneath the harmless marketing copy, the skill contained a simulated backdoor. When installed and executed, it sent a ping to O’Reilly’s server, proving that arbitrary code execution had occurred on the developer’s machine.
He was careful to make the payload benign – no hostnames, no file contents, no credentials were exfiltrated. But as he later pointed out, a malicious actor could have just as easily stolen SSH keys, AWS credentials, and entire codebases.
2. Inflated the Download Count
ClawdHub prominently displays download counts, and users naturally equate “popular” with “safe.” O’Reilly discovered that the download counter API had no authentication, no rate limiting, and no validation. The IP detection relied on the X-Forwarded-For header, which any client can spoof.
He wrote a simple script that curled the download endpoint in a loop. Within an hour, his freshly published skill showed 4,000+ downloads, making it the #1 most downloaded skill on the entire platform.
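ClawdHub's actual endpoint and counter logic are not public, so the following toy model is an assumption-laden sketch, not the real code. It only illustrates why deduplicating "downloads per IP" on a spoofable header fails:

```python
# Toy model of a download counter that deduplicates by "client IP",
# where the IP is read from the spoofable X-Forwarded-For header.
# Names and logic are illustrative, not ClawdHub's real implementation.

seen_ips: set[str] = set()
downloads = 0

def record_download(headers: dict, socket_ip: str) -> int:
    """Count a download, trusting X-Forwarded-For over the socket address."""
    global downloads
    client_ip = headers.get("X-Forwarded-For", socket_ip)
    if client_ip not in seen_ips:  # "one download per IP" -- or so it thinks
        seen_ips.add(client_ip)
        downloads += 1
    return downloads

# An attacker on a single machine fabricates a fresh "IP" per request:
for i in range(4000):
    record_download({"X-Forwarded-For": f"10.0.{i // 256}.{i % 256}"},
                    socket_ip="203.0.113.7")

print(downloads)  # 4000 "unique" downloads, all from one host
```

The fix is equally simple in principle: authenticate the endpoint, rate-limit it, and never trust client-supplied headers for identity.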
3. Watched Real Developers Install It
Within just eight hours, developers from seven different countries had downloaded and executed O’Reilly’s backdoored skill on their own machines. These were real developers who saw a top-ranked skill on an official registry, assumed it was legitimate, and ran it without auditing the source.
One detail makes this especially dangerous: ClawdHub’s web UI shows users only the SKILL.md content. Skills can bundle arbitrary additional files, and the UI does not surface them. The AI agent reads and executes all referenced files, while users typically read none of them.
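Before installing a skill, it is worth enumerating exactly that hidden surface. A minimal helper might look like this (it assumes the skill is an unpacked directory containing a SKILL.md; packaging details vary):

```python
import pathlib

def list_hidden_surface(skill_dir: str) -> list[str]:
    """Return every file in a skill directory other than SKILL.md --
    i.e. the part of the skill a registry UI may never show you."""
    root = pathlib.Path(skill_dir)
    return sorted(
        str(p.relative_to(root))
        for p in root.rglob("*")
        if p.is_file() and p.name != "SKILL.md"
    )
```

If the list is non-empty and you have only read the README, you have not audited the skill.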
4. Highlighted Permission Prompt Fatigue
O’Reilly compared the problem to Windows Vista’s UAC prompts – every time the agent wants to do something, it asks “Allow?” and users click yes repeatedly. Over time, developers habituate and approve everything, including malicious commands hidden in an otherwise routine-looking skill.
As O’Reilly put it: the security lessons we learned over the past two decades do not become obsolete just because we are building AI tools now. If anything, they become more critical.
The Bigger Picture: A Supply Chain Under Attack
O’Reilly’s experiment was just the beginning. Shortly after his disclosure, security firm Snyk conducted the first comprehensive audit of the AI Agent Skills ecosystem, scanning nearly 4,000 skills from ClawdHub. The findings were alarming:
- 13.4% of all skills (534 total) contained at least one critical-level security issue, including malware distribution, prompt injection attacks, and exposed secrets
- 36.82% (1,467 skills) had at least one security flaw of any severity
- Attack techniques included malware delivery via installation instructions, data exfiltration to external servers, and reverse shell backdoors
- Several malicious skills shared command-and-control infrastructure, indicating coordinated campaigns rather than isolated incidents
Cisco’s AI Threat Research team independently confirmed these concerns, calling OpenClaw a textbook example of what happens when AI agents receive maximum system access without adequate security controls.
What This Means for Developers and Businesses
If you are using AI coding agents with skill systems – whether that is Claude Code, OpenClaw, Cursor, or any other agentic tool – here is what you need to understand:
Popularity is not vetting. Download counts can be faked. Official registry listings do not mean audited code. A high-ranking skill is not inherently safe.
Permission prompts are not protection. If your workflow involves clicking “Allow” dozens of times per session, you have already lost the security battle. Humans are terrible at making repeated security decisions.
Skills are code execution. When a skill triggers commands or network requests, treat it exactly as you would treat installing an unknown npm package with full system access – because that is functionally what it is.
The blast radius is enormous. Unlike a compromised web dependency that affects one application, a compromised AI agent skill can access your credentials, communications, codebases, cloud infrastructure, and every service the agent is connected to.
How to Protect Yourself
Here are practical steps every development team should take:
- Audit every skill before installation – read the full source, not just the README. Check all referenced files, not just SKILL.md.
- Run agents in sandboxed environments – containerize your AI agent workflows so a compromised skill cannot reach your host system’s credentials and files.
- Apply the principle of least privilege – give your AI agent only the minimum permissions it needs. Do not grant shell access, file system access, and API access all at once if the task only requires one of them.
- Verify skill provenance – prefer skills from known, reputable authors. Check the GitHub history, look for code reviews, and be skeptical of newly published skills with suspiciously high download counts.
- Use security scanning tools – tools like Snyk’s mcp-scan and Cisco’s open-source Skill Scanner can detect malicious patterns, prompt injection, and exposed secrets in skill files.
- Monitor agent behavior at runtime – audit what your agents actually do, not just what they are configured to do. Log network requests, file operations, and shell commands.
- Keep skills updated and reviewed – treat your skill dependencies with the same rigor you apply to your npm or Composer packages.
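To give a feel for what automated scanning does, here is a deliberately crude pattern scanner. The regexes are a handful of illustrative heuristics, nowhere near what dedicated scanners detect, but they catch the obvious cases named above (remote code fetch, reverse shells, credential paths):

```python
import pathlib
import re

# Crude heuristics only -- dedicated scanners do far more.
# The pattern set here is illustrative, not exhaustive.
SUSPICIOUS = {
    "remote code fetch": re.compile(r"curl[^\n|]*\|\s*(ba)?sh"),
    "reverse shell":     re.compile(r"/dev/tcp/|nc\s+-e\b"),
    "base64 decode":     re.compile(r"base64\s+(-d|--decode)"),
    "credential paths":  re.compile(r"\.aws/credentials|id_rsa|\.ssh/"),
}

def scan_skill(skill_dir: str) -> list[tuple[str, str]]:
    """Return (file, reason) pairs for suspicious patterns in a skill's files."""
    hits = []
    for path in pathlib.Path(skill_dir).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for reason, pattern in SUSPICIOUS.items():
            if pattern.search(text):
                hits.append((str(path), reason))
    return hits
```

A clean report from a toy scanner like this proves nothing; a hit, however, is a strong reason to stop and read the skill line by line.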
Conclusion
The AI agent skills ecosystem is following the same trajectory as every other software package ecosystem before it – rapid growth first, security awareness later. The difference this time is that the stakes are higher. These agents do not just run in a browser tab or a containerized application – they sit in your terminal, read your credentials, access your communications, and execute commands with your permissions.
As Jamieson O’Reilly demonstrated, it took one afternoon to become the #1 downloaded skill on ClawdHub and reach real developers in seven countries. The payload was harmless. The next one might not be.
The AI ecosystem is moving fast. Your security awareness needs to move faster.
Sources:
- Jamieson O’Reilly’s proof-of-concept thread on X
- Snyk: Inside the ‘clawdhub’ Malicious Campaign
- Snyk: ToxicSkills Study — 1,467 Malicious Payloads
- Cisco: Personal AI Agents like OpenClaw Are a Security Nightmare
At Szkonter.Dev, we build custom software systems with security as a foundational principle – not an afterthought. If your team is integrating AI agents into your workflows and needs guidance on doing it safely, get in touch using the form below.