The npm ecosystem has been shaken by two major AI-driven supply chain attacks, exposing developers and organizations to serious credential theft and data leaks. These incidents, dubbed “s1ngularity” and “Shai-Hulud”, show how attackers are now leveraging AI-powered automation to exploit open-source dependencies at scale.
The Rise of AI-Powered Malware
On August 26, 2025, several malicious versions of the popular Nx build system were uploaded to npm. Wiz researchers Merav Bar and Rami McCarthy revealed that the infected packages — including @nrwl/nx and @nx/devkit — contained a hidden post-install script called telemetry.js.
This script scanned developer environments for cryptocurrency wallets, SSH keys, and GitHub tokens, targeting both macOS and Linux users. In a striking twist, the attackers weaponized locally installed AI command-line tools by running them with unsafe flags like --dangerously-skip-permissions and --trust-all-tools, enlisting the AI assistants themselves to hunt the filesystem for sensitive files on the malware's behalf.
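The danger of such flags is that they silently disable an AI tool's permission prompts. As a purely illustrative defense, the sketch below is a hypothetical audit helper (not from the malware or any vendor tool) that checks a command line against a small deny-list of permission-bypassing switches before it runs:

```python
import shlex

# Permission-bypassing flags reported in the s1ngularity attack, plus a
# generic auto-approve switch; a hypothetical deny-list for illustration.
RISKY_FLAGS = {
    "--dangerously-skip-permissions",  # skips tool permission prompts
    "--trust-all-tools",               # auto-approves every tool call
    "--yolo",                          # generic auto-approve example
}

def audit_command(command: str) -> list[str]:
    """Return any risky flags found in a shell command string."""
    tokens = shlex.split(command)
    return [t for t in tokens if t in RISKY_FLAGS]

hits = audit_command("claude -p 'list files' --dangerously-skip-permissions")
print(hits)  # ['--dangerously-skip-permissions']
```

A CI wrapper or pre-commit hook could refuse to execute any command for which `audit_command` returns a non-empty list.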
Within hours, the stolen data — including over 1,000 GitHub tokens and thousands of sensitive files — was uploaded to attacker-controlled GitHub repositories. Although GitHub swiftly disabled these repos, the eight-hour window was enough for significant data exposure.
AI Automation Meets Open Source
Cybersecurity experts warn that s1ngularity represents a new era of attacks where AI tools themselves are part of the weapon.
On Reddit, one user summarized the concern:
“Attackers don’t need LLM jailbreaking anymore — they just chain APIs.”
By August 28, attackers had escalated the breach using stolen GitHub credentials, flipping over 5,500 private repositories to public, affecting more than 400 developers and organizations.
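A visibility flip like this is detectable if you keep a baseline of which repositories must stay private. The sketch below is a hypothetical check, assuming repository objects shaped like GitHub's REST API responses (`name` and `private` fields); it is not an official GitHub tool:

```python
def find_exposed(repos: list[dict], expected_private: set[str]) -> list[str]:
    """Return names of repos that should be private but are now public.

    `repos` mimics GitHub REST API repository objects (`name`, `private`);
    `expected_private` is your own baseline of must-stay-private repos.
    """
    return [r["name"] for r in repos
            if r["name"] in expected_private and not r["private"]]

repos = [
    {"name": "infra-secrets", "private": False},  # flipped public by attacker
    {"name": "website", "private": False},        # public by design
    {"name": "billing", "private": True},         # still private
]
print(find_exposed(repos, {"infra-secrets", "billing"}))
# ['infra-secrets']
```

Running such a check on a schedule turns a silent visibility change into an immediate alert rather than an eight-hour exposure window.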
The Shai-Hulud Worm Attack
A related campaign, known as Shai-Hulud, targeted npm packages from CrowdStrike and other vendors.
Researchers at Socket.dev identified it as a self-propagating AI-enhanced worm, capable of automatically modifying and republishing packages. The malware also ran TruffleHog, a legitimate secret scanner, to extract even more tokens and credentials.
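Scanners like TruffleHog work in part by matching recognizable token formats — GitHub tokens, for instance, carry prefixes such as ghp_, gho_, and ghs_. The following is a minimal sketch of that idea, not TruffleHog's actual detection logic:

```python
import re

# Classic GitHub tokens are a known prefix (ghp_, gho_, ghs_) followed by
# 36 alphanumeric characters; a simplified pattern for illustration.
TOKEN_RE = re.compile(r"\bgh[pos]_[A-Za-z0-9]{36}\b")

def scan_text(text: str) -> list[str]:
    """Return candidate GitHub tokens found in a blob of text."""
    return TOKEN_RE.findall(text)

sample = "export GITHUB_TOKEN=ghp_" + "a" * 36
print(scan_text(sample))  # one candidate token found
```

The same pattern-matching that helps defenders audit their own repositories is exactly what the worm turned against its victims.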
Palo Alto Networks' Unit 42 reported that both the s1ngularity and Shai-Hulud attacks showed traces of AI-generated code, including emoji-laden comments — a subtle fingerprint of large language model (LLM) output.
Lessons for the Open Source Community
Both incidents highlight a dangerous truth: CI/CD pipelines are now the weakest link.
As one analyst wrote:
“Using AI to weaponize build tools is the new phishing. The weakest link isn’t people anymore — it’s automated pipelines nobody audits.”
The attacks demonstrate that compromised credentials can ripple through the entire open-source ecosystem, spreading at the speed of continuous integration.
Experts emphasize stronger token management, dependency validation, and AI activity monitoring to defend against this evolving threat.
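One widely recommended hardening step against install-time payloads like telemetry.js is disabling npm lifecycle scripts by default, so a compromised dependency's post-install hook cannot run automatically. A minimal .npmrc sketch:

```ini
; ~/.npmrc — refuse to run install-time lifecycle scripts by default,
; so a compromised package's postinstall payload cannot auto-execute.
ignore-scripts=true
```

The trade-off is that packages with legitimate build steps then need those scripts run explicitly, so teams typically pair this setting with an allow-list or a reviewed install process.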
While automation has boosted developer productivity, it has also opened a new frontier for attackers who can now weaponize AI tools themselves.