Insights · AI Security

Slopsquatting: How AI Hallucinations Are Poisoning Your Software Supply Chain

AI coding assistants are recommending software packages that don't exist—and attackers are registering those fake names with malware. Nearly 20% of AI-generated code recommendations include non-existent packages. Here's what every business needs to know.

Your developer asked an AI coding tool to help configure a cryptocurrency integration. The AI confidently recommended a package called ccxt-mexc-futures. The developer installed it without a second thought—it sounded legitimate, it matched the AI's explanation perfectly, and the terminal command ran without error.

It was malware. The package silently rerouted trading commands to an attacker's server and began draining the company's accounts.

The package was never real. The AI hallucinated it. Attackers were waiting.

Understanding the Two-Part Threat

To understand why this attack is so effective, you need to understand two things happening simultaneously: how AI coding tools fail, and how attackers exploit those failures.

Part 1: Training Data Poisoning

Before an AI tool can hallucinate a package name, its foundational knowledge must be compromised or incomplete. Training data poisoning is the attack that makes an AI model unreliable at its core.

Sophisticated attackers deliberately seed malicious, biased, or corrupted data into the datasets used to train machine learning models. Because the model learns its behaviors from that data, the poisoned samples embed errors and backdoors that can be exploited months or years after deployment.

For a business using AI-powered fraud detection, for example, an attacker who poisons the training data with mislabeled fraudulent transactions teaches the model that those transaction patterns are legitimate. By the time you notice a problem, the financial damage is done—and the model itself is the vulnerability.

Corrupted training data is invisible to standard security tools—the malicious influence is baked into the model's statistical weights

Standard antivirus software cannot detect this. There is no malicious file to flag. The compromise is statistical, embedded within terabytes of training data.
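The fraud-detection scenario can be made concrete with a toy model. Everything below is invented for illustration: a "detector" that simply compares a transaction amount against the average fraudulent and legitimate amounts it was trained on, and a poisoned dataset in which an attacker has relabeled two high-value fraud samples as legitimate.

```python
def centroid(values):
    return sum(values) / len(values)

def train(samples):
    """Learn the average 'fraud' and 'legit' amounts from (amount, label) pairs."""
    fraud = [amt for amt, label in samples if label == "fraud"]
    legit = [amt for amt, label in samples if label == "legit"]
    return centroid(fraud), centroid(legit)

def predict(model, amount):
    """Classify by whichever learned average the amount sits closer to."""
    fraud_avg, legit_avg = model
    return "fraud" if abs(amount - fraud_avg) < abs(amount - legit_avg) else "legit"

# Clean training data: small legitimate payments, large fraudulent ones.
clean = [(20, "legit"), (35, "legit"), (50, "legit"),
         (900, "fraud"), (950, "fraud"), (1000, "fraud")]

# Poisoned copy: the attacker relabels two fraud samples as legitimate.
poisoned = [(amt, "legit") if amt in (900, 950) else (amt, label)
            for amt, label in clean]

suspicious = 600
print(predict(train(clean), suspicious))     # flagged as fraud
print(predict(train(poisoned), suspicious))  # now waved through as legit
```

Nothing about the poisoned samples looks anomalous on inspection; only the model's learned averages shift, which is why the compromise is invisible to file-scanning tools.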

Part 2: Slopsquatting

Slopsquatting is the practice of weaponizing AI hallucinations to deliver malware directly to developers' machines.

Here is how it works in practice:

  1. An AI coding assistant—ChatGPT, GitHub Copilot, Claude, or any similar tool—is statistically likely to occasionally recommend a software package that sounds real but does not exist on any official registry.
  2. Attackers run automated scripts that continuously probe popular AI coding tools to catalog every package name they hallucinate.
  3. The moment a fabricated name is identified, attackers register it on public registries like PyPI or npm and populate it with malware.
  4. A developer asks the AI the same question, receives the same hallucinated recommendation, copies the install command, and executes it—introducing malware directly into the production environment.

No phishing link. No suspicious email. No typo. The AI said to do it.

This is the evolution of typosquatting. Traditional typosquatting relied on developers mistyping requests as requets. Slopsquatting relies entirely on the AI making the error—and AI errors are statistically unavoidable at scale.
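One cheap countermeasure drops straight out of step 4: before installing, ask the registry whether the name even exists. The sketch below uses PyPI's public JSON API, where a 404 means the name is unregistered, which is exactly the gap a slopsquatter races to fill. The script itself is illustrative, not a complete defense, since a registered name can still be malicious.

```python
import urllib.error
import urllib.request

def interpret_status(status: int) -> bool:
    """Map a registry HTTP status to 'does this package exist?'."""
    if status == 200:
        return True
    if status == 404:
        return False
    raise RuntimeError(f"unexpected registry response: {status}")

def package_exists_on_pypi(name: str) -> bool:
    """Query PyPI's public JSON API for a package before installing it."""
    try:
        with urllib.request.urlopen(
            f"https://pypi.org/pypi/{name}/json", timeout=10
        ) as resp:
            return interpret_status(resp.status)
    except urllib.error.HTTPError as err:
        return interpret_status(err.code)

# Example: package_exists_on_pypi("requests") returns True for any
# published name; a hallucinated name returns False.
```

A name that exists still needs vetting for history and maintainership; a name that does not exist should never reach an install command.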

The Numbers Make This a Primary Threat Vector

An analysis of more than 576,000 AI-generated Python and JavaScript code samples found that nearly 20% of the packages they recommended do not exist. Even leading commercial models hallucinated dependencies in roughly 5% of cases: a small rate in isolation, but across the thousands of queries a growing development team submits daily, hallucinations become a statistical certainty.

Research into the anatomy of these hallucinations found predictable patterns: the same fabricated names recur across repeated queries, and they tend to look plausible rather than random. That repeatability is what makes them worth an attacker's time to register.

Because the hallucinated names are completely novel, they bypass traditional security systems designed to flag packages that look similar to known ones. There is nothing to compare against.

A software supply chain attack can compromise an entire product from a single hallucinated dependency

The ccxt-mexc-futures case is a documented example of this attack in production. The malicious code specifically targeted three core functions in the legitimate ccxt framework—describe, sign, and request headers—allowing it to execute arbitrary code on the victim's machine, pull a configuration from an attacker-controlled server, and silently redirect trading activity. For any business dealing in cryptocurrency, automated payments, or financial APIs, a single infection like this is potentially catastrophic.

What Makes This Hard to Catch

The attack succeeds because it aligns with normal, trusted developer behavior: installing an AI-recommended dependency looks exactly like installing any other, the name is plausible, and the command completes without error or warning.

Your existing security stack almost certainly has no detection for this.

Specific Controls That Stop Slopsquatting

Mandatory human review of AI-generated code. Every script, dependency list, or configuration generated by an AI tool should require a second set of eyes from a developer before anything is installed or deployed to production. This is a zero-cost policy change.

Automated dependency scanners in CI/CD pipelines. Tools like Socket, Snyk, or GitHub's Dependabot can be integrated into your continuous integration environment to flag dependencies that are unknown, newly published, or already reported as malicious. Any suspect package triggers an alert before the install runs.
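Alongside commercial scanners, even a small self-built gate helps. Below is a sketch that fails the pipeline when a requirements file names anything outside the team's approved list. The parsing is deliberately crude (it ignores PEP 503 name normalization, for instance) and the allowlist is illustrative.

```python
import re

def gate_requirements(lines, allowlist):
    """Flag requirement names that aren't on the team's approved list,
    so a hallucinated dependency stops the pipeline before `pip install`."""
    unknown = []
    for raw in lines:
        req = raw.split("#", 1)[0].strip()   # drop comments and blanks
        if not req:
            continue
        # Take the bare name before any version specifier or extras marker.
        name = re.split(r"[<>=!~\[; ]", req, maxsplit=1)[0].lower()
        if name and name not in allowlist:
            unknown.append(name)
    return unknown

approved = {"requests", "ccxt", "numpy"}     # illustrative allowlist
reqs = ["requests>=2.31", "ccxt", "ccxt-mexc-futures", "# tooling", ""]
print(gate_requirements(reqs, approved))     # ['ccxt-mexc-futures']
```

In a real pipeline the script would exit nonzero on any flagged name, and the allowlist would live in version control so additions get the same review as code.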

A package verification habit. Train your developers to search for any recommended package directly on the official registry before installing it. Confirm that the package has a meaningful publication history, a legitimate maintainer, and real download counts. A brand-new package with zero history is a red flag.
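The verification habit can be partially automated. Here is a sketch of red-flag heuristics over a few registry metadata fields; the field names and thresholds are illustrative, not any registry's actual schema, so adapt them to whatever your tooling actually fetches.

```python
import datetime as dt

def registry_red_flags(meta, min_age_days=90, min_releases=3):
    """Return a list of red flags for a package's registry metadata.

    `meta` is a plain dict with illustrative keys: a list of releases,
    an ISO 8601 first-upload timestamp, and a maintainer string.
    The thresholds are starting points, not industry standards.
    """
    flags = []
    releases = meta.get("releases", [])
    if len(releases) < min_releases:
        flags.append(f"only {len(releases)} release(s)")
    first_upload = meta.get("first_upload")
    if first_upload:
        age = dt.datetime.now(dt.timezone.utc) - dt.datetime.fromisoformat(first_upload)
        if age.days < min_age_days:
            flags.append(f"first published {age.days} day(s) ago")
    else:
        flags.append("no publication history at all")
    if not meta.get("maintainer"):
        flags.append("no listed maintainer")
    return flags

# A brand-new package with zero history trips every heuristic:
print(registry_red_flags({"releases": [], "first_upload": None, "maintainer": ""}))
```

An empty flag list is not a clean bill of health; it only means the package clears the bar for a human to look at it.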

Maintain an AI Bill of Materials (AIBOM). For teams building or integrating AI models, document every dataset, dependency, and pre-trained model in your stack. Storing cryptographically signed hashes of datasets verifies that their contents haven't been tampered with between acquisition and use.
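The signed-hash idea can be as simple as a manifest of SHA-256 digests recorded when each dataset is acquired and re-checked before every training run. A minimal sketch follows; the manifest format is invented, and a real AIBOM would also carry signatures and provenance metadata.

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Stream a file through SHA-256 so large datasets never load into memory."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_aibom(manifest, root):
    """Compare each dataset's recorded digest against the file on disk.

    `manifest` maps relative paths to expected SHA-256 hex digests, as
    recorded in the AIBOM at acquisition time. Returns paths that fail.
    """
    return [rel for rel, expected in manifest.items()
            if sha256_of(Path(root) / rel) != expected]

# Usage (hypothetical paths): a nonempty result means a dataset changed
# between acquisition and use, and the training run should halt.
#   tampered = verify_aibom({"data/train.csv": "<recorded digest>"}, ".")
```

Pair this with signing the manifest itself, since a hash list an attacker can rewrite verifies nothing.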

Pro Tip: Make package verification a required step in your code review checklist, not an optional best practice. A three-second search on PyPI or npm before installing a package is the fastest ROI in security.

For Non-Technical Business Owners

If your business uses outside developers, contractors, or a small internal engineering team, here is the practical action item: include an AI code review requirement in every development contract and internal development process.

Any code generated with AI assistance must be reviewed for hallucinated dependencies before deployment. This is not bureaucracy—it is the minimum standard of care now that AI tools have become a documented attack surface.


This is the second article in our five-part AI security series. The next article covers a threat that requires no technical skill to execute: Prompt injection attacks and AI worms—what happens when your AI assistant gets hijacked.

For a hands-on review of your development team's AI security posture, contact SafeLab.

