When AI Recommends the Wrong Thing

VS Code Extensions and a New Supply Chain Risk

Modern development workflows are built on trust. Trust in tools. Trust in automation. And increasingly, trust in AI.

Open a file in VS Code and you’re greeted with a helpful prompt: “Recommended extension available.” One click later, your environment is more powerful, more productive, or so it seems. But what happens when that recommendation points to something that shouldn’t be trusted at all?

Recent security research exposed a serious vulnerability affecting several AI-powered VS Code forks, including Cursor, Windsurf, Google Antigravity, and Trae. The issue isn’t obscure or technically complex. In fact, that’s what makes it so dangerous.

This is not a story about careless developers or sloppy tooling. It’s a story about how implicit trust, especially when amplified by AI, has quietly become a new supply chain risk.

How a Helpful Recommendation Becomes an Attack Vector

AI-enhanced IDEs routinely suggest extensions to improve productivity. The problem begins when those suggestions reference extensions that don’t exist in the registry being used.

Here’s what’s happening behind the scenes:

VS Code forks often inherit Microsoft’s list of “recommended” extensions. But many of these forks rely on Open VSX, not the Microsoft Marketplace. Open VSX doesn’t always contain the same extensions, or even the same namespaces.
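The inheritance happens through VS Code’s workspace recommendation mechanism, where extensions are named by a `<publisher>.<name>` ID in a `.vscode/extensions.json` file. A minimal sketch (the file format is real; a fork pointed at Open VSX resolves these same IDs against a different registry, where a publisher namespace may be unclaimed):

```json
{
  "recommendations": [
    "ms-python.python",
    "dbaeumer.vscode-eslint"
  ]
}
```

The IDs carry no registry information at all, which is exactly why the same recommendation can mean something different on Open VSX than on the Microsoft Marketplace.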

That gap creates an opportunity.

An attacker can register an unclaimed namespace that looks official and publish a malicious extension using a familiar, trustworthy name. When the IDE later recommends that extension, everything appears legitimate. The developer clicks “install.” The extension runs. Credentials, tokens, and source code are suddenly at risk.
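The attack surface boils down to a set difference: recommended IDs whose publisher namespace was never claimed on the registry the fork actually uses. A minimal Python sketch, with invented extension IDs and registry contents (this is illustrative, not the researchers’ tooling):

```python
# Recommended extension IDs inherited from the upstream marketplace,
# written as "<publisher>.<name>" (hypothetical examples).
inherited_recommendations = [
    "bigco.sql-tools",
    "dbaeumer.vscode-eslint",
    "acme.cloud-deploy",
]

# Publisher namespaces that actually exist on the registry the fork
# uses (e.g. Open VSX). In this sketch, "bigco" and "acme" were
# never claimed there.
claimed_namespaces = {"dbaeumer", "ms-python"}

def unclaimed_targets(recommendations, claimed):
    """Return recommended IDs whose publisher namespace is unclaimed
    on the active registry -- exactly the IDs an attacker can register."""
    gaps = []
    for ext_id in recommendations:
        publisher, _, _name = ext_id.partition(".")
        if publisher not in claimed:
            gaps.append(ext_id)
    return gaps

print(unclaimed_targets(inherited_recommendations, claimed_namespaces))
# → ['bigco.sql-tools', 'acme.cloud-deploy']
```

Every ID in that output is a name the IDE will one day recommend with full confidence, and that an attacker can publish first.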

In one documented case, a fake PostgreSQL extension, created as a research placeholder, was installed more than 500 times. No exploit kits. No social engineering. Just a recommendation and a click.

This is a classic supply chain attack, made frictionless by AI-driven convenience.

The Real Issue: We’ve Shifted Trust Without Shifting Controls

For years, the industry has pushed the idea of “shifting left” on security, finding issues earlier in the development lifecycle. But this incident highlights a blind spot in how we’ve interpreted that idea.

Security checks typically happen at commit time, during CI, or before release. But AI agents operate earlier than all of that. They influence decisions at the moment of creation: which tools to install, which dependencies to add, which libraries to trust.

That moment, the point of recommendation, is now part of the attack surface.

Unchecked, it exposes teams to familiar risks in new forms:

  • Typosquatting and lookalike packages
  • Abandoned or unclaimed namespaces
  • Outdated or vulnerable dependencies
  • Missing or inconsistent verification mechanisms

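Some of these checks are easy to automate. As one example, a lookalike check against a team-approved publisher list catches the simplest typosquats with nothing more than edit distance. A minimal sketch, assuming a hypothetical allowlist (a real policy would also verify signatures and registry metadata):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Publishers a team has explicitly approved (hypothetical list).
APPROVED_PUBLISHERS = {"ms-python", "dbaeumer", "redhat"}

def flag_lookalike(publisher: str, max_distance: int = 2):
    """Return approved publishers a name is suspiciously close to,
    or None if the name is approved or clearly unrelated."""
    if publisher in APPROVED_PUBLISHERS:
        return None
    close = [p for p in APPROVED_PUBLISHERS
             if edit_distance(publisher, p) <= max_distance]
    return close or None

print(flag_lookalike("ms-pythom"))  # one substitution away from "ms-python"
```

A check like this costs nothing to run at install time, which is precisely when the recommendation-driven workflow currently runs no checks at all.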
When developers assume their IDE’s suggestions are inherently safe, verification disappears from the workflow entirely.

Why This Changes the Supply Chain Conversation

This vulnerability isn’t really about VS Code forks. It’s about how AI reshapes trust boundaries.

AI agents don’t just automate tasks; they influence judgment. A recommendation from an AI-powered tool carries implied authority, even when no explicit guarantee exists. That authority accelerates adoption, but it also accelerates risk.

As AI becomes more embedded in development environments, organizations need to rethink where trust is earned and how it’s validated. Security can’t be bolted on after the fact if the initial decision—what to install—is already compromised.


Revenera SCA

Software Composition Analysis (SCA) solutions from Revenera help you discover, assess, and manage license and security risk across all your software applications.

The Takeaway: Informed Trust, Not Blind Faith

AI-driven development isn’t going away. Nor should it. The productivity gains are real. But convenience can’t come at the cost of visibility and control.

Developers and organizations need to:

  • Treat recommendations as inputs, not approvals
  • Verify extensions and dependencies before installation
  • Monitor what AI agents suggest, and where those suggestions come from
  • Recognize that “shift left” now includes the moment of recommendation

Your IDE is one of your most powerful allies. But like any ally, it should be trusted with intention, not assumption.

In a world where a single click can introduce risk into the heart of your supply chain, informed trust isn’t optional; it’s essential.