Claude Code's Entire Source Code Just Leaked via npm — Here's What 512,000 Lines Reveal About Anthropic's AI Coding Agent

A misconfigured build shipped a source map to npm, exposing Claude Code's full 512K-line TypeScript codebase — revealing hidden features, internal codenames, and a Tamagotchi.
The biggest accidental source code leak in AI tooling history
On March 31, 2026, security researcher Chaofan Shou posted a discovery on X that sent shockwaves through the developer community: the entire proprietary source code of Anthropic's Claude Code CLI — the company's flagship AI coding agent — was sitting in plain sight on the public npm registry. Not fragments. Not obfuscated stubs. The full, unminified, original TypeScript source: nearly 1,900 files and over 512,000 lines of code.
The culprit was mundane: a .map source map file that should never have been included in the published npm package. But the consequences were anything but mundane. Within hours, the code was mirrored to multiple GitHub repositories, accumulating over 1,100 stars and 1,900 forks before Anthropic could react. Cached copies spread across the internet. DMCA takedowns began. And the AI community got an unprecedented, unfiltered look at how the world's most popular AI coding assistant actually works under the hood.
This is not just a story about a build misconfiguration. It is a detailed technical window into the architecture, ambitions, and internal culture of one of the most important AI companies in the world.
How a source map file exposed everything
When you build a TypeScript or JavaScript application for production, the build toolchain typically minifies and bundles the code into compact output files. To make debugging possible, the toolchain also generates source map files (.map files) — JSON files that map the minified output back to the original source.
Here is what the structure of a .map file looks like:
```json
{
  "version": 3,
  "sources": ["../src/main.tsx", "../src/tools/BashTool.ts", "..."],
  "sourcesContent": ["// The ENTIRE original source code", "..."],
  "mappings": "AAAA,SAAS,OAAO..."
}
```
That sourcesContent array contains every line of every original file — comments, internal constants, system prompts, feature flags, all of it — embedded as strings inside a single JSON file. When Anthropic published Claude Code version 2.1.88 to npm without excluding this file, they effectively published their entire codebase.
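To make the exposure concrete, here is a minimal TypeScript sketch (not Anthropic's code) of how little effort recovery takes. The `sources` and `sourcesContent` fields are standard Source Map v3; anything embedded in `sourcesContent` can be written straight back to disk:

```typescript
// Sketch only: `sources`, `sourcesContent`, and `mappings` are real Source Map
// v3 fields; the extraction logic is illustrative.
interface SourceMapV3 {
  version: number;
  sources: string[];
  sourcesContent?: (string | null)[];
  mappings: string;
}

// Recover every embedded original file from a parsed .map object.
function extractSources(map: SourceMapV3): Map<string, string> {
  const files = new Map<string, string>();
  (map.sourcesContent ?? []).forEach((content, i) => {
    if (content != null) files.set(map.sources[i], content);
  });
  return files;
}

const demo: SourceMapV3 = {
  version: 3,
  sources: ["../src/main.tsx"],
  sourcesContent: ["// the entire original file, verbatim\nexport const x = 1;\n"],
  mappings: "AAAA",
};
console.log(extractSources(demo).get("../src/main.tsx"));
```

No decoding of the `mappings` string is needed at all: the original files are stored verbatim, so recovery is a JSON parse and a loop.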
Claude Code uses Bun as its bundler and runtime (not Node.js, as many assumed). Bun generates source maps by default unless you explicitly configure it not to. Combined with a missing .npmignore entry or a misconfigured files field in package.json, the result was roughly 60 megabytes of internal material freely downloadable by anyone running npm pack @anthropic-ai/claude-code.
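A standard guard against this failure mode is an explicit `files` allowlist in package.json: npm publishes only what is listed (plus a few always-included files such as README and LICENSE), so a stray `.map` file never reaches the tarball. The entries below are illustrative, not Claude Code's actual layout:

```json
{
  "name": "@anthropic-ai/claude-code",
  "files": ["cli.js", "vendor/"]
}
```

Running `npm pack --dry-run` before publishing prints the exact file list that would ship, which makes this class of leak easy to catch in CI.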
Perhaps most embarrassingly, this was not the first time. Analysis revealed that versions v0.2.8 and v0.2.28 — dating back to 2025 — were also shipped with source maps. The source code had technically been accessible for 13 months since Claude Code's initial launch on February 24, 2025, before this March 2026 discovery brought it to widespread public attention.
What the 512,000 lines actually contain
From the outside, Claude Code looks like a polished but relatively simple CLI tool. From the inside, it is a sprawling engineering artifact. The entry point alone — main.tsx — weighs in at 785KB. The codebase includes a custom React terminal renderer built on Ink, over 40 agent tools, a multi-agent orchestration system, a background memory consolidation engine, and features that have not yet been publicly announced.
The largest files tell the story of where complexity lives:
- QueryEngine.ts (46,000 lines) — The core LLM API engine handling streaming, tool call loops, and orchestration logic
- Tool.ts (29,000 lines) — Agent tool type definitions and permission schemas for all 40+ tools
- commands.ts (25,000 lines) — Roughly 85 slash commands, from `/commit` and `/review` to `/compact`, `/mcp`, `/memory`, and `/skills`
The complete tool registry: 40+ agent capabilities
One of the most revealing aspects of the leak is the full list of tools that Claude Code can invoke. Each tool has risk classifications (LOW, MEDIUM, HIGH), permission requirements, and detailed schemas. Here is the complete registry as extracted from the source:
- Core file operations: FileReadTool, FileEditTool, FileWriteTool
- Search: GlobTool (pattern matching), GrepTool (content search — uses native `bfs`/`ugrep` when available for performance)
- Shell execution: BashTool, PowerShellTool (with optional sandboxing)
- Web access: WebFetchTool, WebSearchTool, WebBrowserTool
- Agent orchestration: AgentTool (spawn child agents), SendMessageTool, TeamCreateTool, TeamDeleteTool
- Task management: TaskCreateTool, TaskGetTool, TaskListTool, TaskUpdateTool, TaskOutputTool, TaskStopTool
- Development tools: NotebookEditTool (Jupyter), LSPTool (Language Server Protocol), REPLTool (interactive VM shell)
- MCP integration: ListMcpResourcesTool, ReadMcpResourceTool, MonitorTool
- Git operations: EnterWorktreeTool, ExitWorktreeTool
- Planning: EnterPlanModeTool, ExitPlanModeV2Tool
- Automation: ScheduleCronTool, RemoteTriggerTool, WorkflowTool
- Internal-only: ConfigTool, TungstenTool (advanced features restricted to Anthropic employees)
Tools are registered through getAllBaseTools() and dynamically filtered based on feature gates, user type, environment flags, and permission deny rules. A dedicated tool schema cache (toolSchemaCache.ts) optimizes JSON schema injection into prompts for token efficiency.
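That registration flow can be sketched roughly as follows. The `getAllBaseTools()` name, the risk tiers, and the `'ant'` user type come from the leak write-up; the specific gate and deny-rule logic here is an assumption:

```typescript
// Illustrative sketch of feature-gated tool registration; not Anthropic's code.
type Risk = "LOW" | "MEDIUM" | "HIGH";

interface Tool {
  name: string;
  risk: Risk;
  internalOnly?: boolean;  // e.g. TungstenTool, restricted to employees
  featureGate?: string;    // flag that must be enabled for the tool to appear
}

interface ToolContext {
  userType: "external" | "ant"; // 'ant' marks Anthropic employees in the leak
  enabledGates: Set<string>;
  denyRules: Set<string>;       // tool names blocked by permission config
}

function getAllBaseTools(): Tool[] {
  return [
    { name: "FileReadTool", risk: "LOW" },
    { name: "BashTool", risk: "HIGH" },
    { name: "WebFetchTool", risk: "MEDIUM" },
    { name: "TungstenTool", risk: "HIGH", internalOnly: true },
    { name: "ScheduleCronTool", risk: "MEDIUM", featureGate: "AUTOMATION" },
  ];
}

function availableTools(ctx: ToolContext): Tool[] {
  return getAllBaseTools().filter((t) => {
    if (t.internalOnly && ctx.userType !== "ant") return false;
    if (t.featureGate && !ctx.enabledGates.has(t.featureGate)) return false;
    if (ctx.denyRules.has(t.name)) return false;
    return true;
  });
}
```

The practical consequence is that two users running the same binary can see very different tool sets, which is exactly how KAIROS, BUDDY, and the internal-only tools stay invisible in public builds.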
The "Dream" system: Claude literally consolidates memories while you sleep
Among the most technically interesting discoveries is a system called autoDream, found in services/autoDream/. This is a background memory consolidation engine that runs as a forked sub-agent between user sessions. The naming is deliberate — it is inspired by REM sleep in the human brain.
The dream system follows a three-gate trigger before it activates:
- Time gate: At least 24 hours since the last dream
- Session gate: At least 5 sessions since the last dream
- Lock gate: A consolidation lock must be acquired (preventing concurrent dreams)
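The three gates reduce to a single predicate. The thresholds below are from the source; the state record and lock representation are assumptions:

```typescript
// Sketch of the autoDream three-gate trigger; not Anthropic's code.
interface DreamState {
  lastDreamAt: number;        // epoch ms of the previous dream
  sessionsSinceDream: number;
  lockHeld: boolean;          // consolidation lock taken by another process?
}

const DAY_MS = 24 * 60 * 60 * 1000;

function shouldDream(state: DreamState, now: number): boolean {
  const timeGate = now - state.lastDreamAt >= DAY_MS; // gate 1: 24h elapsed
  const sessionGate = state.sessionsSinceDream >= 5;  // gate 2: 5+ sessions
  const lockGate = !state.lockHeld;                   // gate 3: lock acquirable
  return timeGate && sessionGate && lockGate;         // all three must pass
}
```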
All three conditions must be met. When triggered, the dream follows four structured phases defined in consolidationPrompt.ts:
- Phase 1 — Orient: Scan the memory directory, read `MEMORY.md`, skim existing topic files
- Phase 2 — Gather Recent Signal: Search for new information worth persisting — prioritizing daily logs, drifted memories, and transcript search
- Phase 3 — Consolidate: Write or update memory files, convert relative dates to absolute, delete contradicted facts
- Phase 4 — Prune and Index: Keep `MEMORY.md` under 200 lines and approximately 25KB, remove stale pointers, resolve contradictions
The system prompt for the dream sub-agent reads: "You are performing a dream — a reflective pass over your memory files. Synthesize what you've learned recently into durable, well-organized memories so that future sessions can orient quickly."
The dream agent gets read-only bash access — it can examine the project but cannot modify code. It is purely a memory consolidation pass that reportedly reduces context bloat by approximately 40%, making subsequent sessions faster and more focused.
KAIROS: an always-on Claude that watches and acts proactively
Deep in the assistant/ directory, the source reveals an entire mode called KAIROS — a persistent, always-running Claude assistant that does not wait for user input. Gated behind the PROACTIVE and KAIROS compile-time feature flags, this mode is completely absent from external builds.
KAIROS maintains append-only daily log files, writing observations, decisions, and actions throughout the day. On a regular interval, it receives <tick> prompts that let it decide whether to act proactively or remain quiet. Any proactive action estimated to take more than 15 seconds gets deferred to avoid disrupting the developer's workflow.
KAIROS has access to exclusive tools that regular Claude Code sessions do not:
- SendUserFile — Push files directly to the user (notifications, summaries)
- PushNotification — Send push notifications to the user's device
- SubscribePR — Subscribe to and monitor pull request activity
When KAIROS is active, there is a special output mode called Brief — extremely concise responses designed for a persistent assistant that should not flood the terminal. This is clearly an early version of an "ambient AI developer companion" that watches your project and steps in only when it has something valuable to contribute.
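A rough sketch of that tick loop, assuming a simple deferral queue (the 15-second threshold is from the source; the action shape and queue are invented for illustration):

```typescript
// Hypothetical sketch of the KAIROS <tick> decision; not Anthropic's code.
interface ProactiveAction {
  description: string;
  estimatedSeconds: number;
  run: () => Promise<void>;
}

const DEFER_THRESHOLD_SECONDS = 15;
const deferred: ProactiveAction[] = []; // revisited later, e.g. when idle

async function onTick(candidate: ProactiveAction | null): Promise<string> {
  if (!candidate) return "quiet"; // nothing worth doing this tick
  if (candidate.estimatedSeconds > DEFER_THRESHOLD_SECONDS) {
    deferred.push(candidate);     // too disruptive to run mid-flow
    return "deferred";
  }
  await candidate.run();          // cheap enough to act on immediately
  return "acted";
}
```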
BUDDY: yes, there is a Tamagotchi hidden inside Claude Code
Perhaps the most delightful discovery is BUDDY — a full Tamagotchi-style companion pet system found in the buddy/ directory, gated behind the BUDDY compile-time feature flag.
Each user gets a deterministically assigned buddy based on their userId hash, processed through a Mulberry32 pseudo-random number generator with the salt 'friend-2026-401'. The same user always gets the same buddy.
The system features 18 species across 5 rarity tiers:
- Common (60%): Pebblecrab, Dustbunny, Mossfrog, Twigling, Dewdrop, Puddlefish
- Uncommon (25%): Cloudferret, Gustowl, Bramblebear, Thornfox
- Rare (10%): Crystaldrake, Deepstag, Lavapup
- Epic (4%): Stormwyrm, Voidcat, Aetherling
- Legendary (1%): Cosmoshale, Nebulynx
On top of rarity, there is an independent 1% shiny chance, meaning a Shiny Legendary Nebulynx has a 0.01% probability. Each buddy gets procedurally generated stats (Debugging, Patience, Chaos, Wisdom, Snark — scored 0 to 100), six possible eye styles, eight hat options (some gated by rarity), and a "soul" — a personality description generated by Claude on first hatch.
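The deterministic assignment is easy to reconstruct because Mulberry32 is a well-known public-domain 32-bit PRNG. The sketch below seeds it from a hash of the userId plus the leaked salt; the exact hash function Anthropic uses is not known, so FNV-1a stands in here:

```typescript
// Mulberry32 is the standard public-domain implementation; the seed derivation
// and rarity table wiring are reconstructions, not Anthropic's code.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // uniform in [0, 1)
  };
}

// FNV-1a string hash (an assumption) to turn userId + salt into a 32-bit seed.
function hash32(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0;
}

// Rarity weights from the article: 60 / 25 / 10 / 4 / 1.
const RARITIES: [string, number][] = [
  ["Common", 0.6], ["Uncommon", 0.25], ["Rare", 0.1], ["Epic", 0.04], ["Legendary", 0.01],
];

function assignRarity(userId: string): string {
  const rng = mulberry32(hash32(userId + "friend-2026-401"));
  let roll = rng();
  for (const [name, weight] of RARITIES) {
    if (roll < weight) return name;
    roll -= weight; // walk the cumulative distribution
  }
  return "Legendary"; // floating-point edge case
}
```

Because the seed depends only on the userId and a fixed salt, the same user always hatches the same buddy, with no server-side state required.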
Buddies are rendered as 5-line-tall, 12-character-wide ASCII art with idle and reaction animation frames, sitting next to the user's input prompt. The code references April 1–7, 2026 as a teaser window (this leak may have spoiled a planned Easter egg), with a full launch seemingly targeted for May 2026.
ULTRAPLAN: 30-minute deep planning sessions on remote infrastructure
The source reveals a mode called ULTRAPLAN where Claude Code offloads complex planning tasks to a remote Cloud Container Runtime (CCR) session running Opus 4.6. The session gets up to 30 minutes to think through a problem, with the user's terminal polling every 3 seconds for results.
A browser-based UI lets users watch the planning happen in real-time and approve or reject the output. When approved, a special sentinel value __ULTRAPLAN_TELEPORT_LOCAL__ "teleports" the result back to the local terminal. This suggests Anthropic is building toward significantly longer reasoning horizons for complex architectural and design tasks.
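The polling handshake might look roughly like this. The sentinel string is quoted from the source; the status shape, the `fetchStatus` callback, and the sentinel-as-prefix convention are assumptions:

```typescript
// Hypothetical sketch of the ULTRAPLAN polling loop; not Anthropic's code.
const TELEPORT_SENTINEL = "__ULTRAPLAN_TELEPORT_LOCAL__";

interface PlanStatus {
  state: "running" | "done";
  payload?: string;
}

async function pollForPlan(
  fetchStatus: () => Promise<PlanStatus>,
  intervalMs = 3000,
  maxAttempts = 600, // 600 * 3s = the 30-minute ceiling
): Promise<string> {
  for (let i = 0; i < maxAttempts; i++) {
    const status = await fetchStatus();
    if (status.state === "done" && status.payload?.startsWith(TELEPORT_SENTINEL)) {
      // "Teleport": strip the sentinel and hand the plan to the local terminal.
      return status.payload.slice(TELEPORT_SENTINEL.length);
    }
    await new Promise((r) => setTimeout(r, intervalMs)); // wait before re-polling
  }
  throw new Error("ULTRAPLAN session timed out");
}
```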
Multi-agent orchestration: Claude as a team coordinator
The coordinator/ directory contains a full multi-agent orchestration system, activated via CLAUDE_CODE_COORDINATOR_MODE=1. When enabled, Claude Code transforms from a single agent into a coordinator that spawns, directs, and manages multiple worker agents in parallel.
The coordination follows four phases:
- Research — Workers investigate the codebase in parallel
- Synthesis — The coordinator reads findings and crafts specifications
- Implementation — Workers make targeted changes per spec, each committing independently
- Verification — Workers test that changes work
The system prompt explicitly teaches parallelism: "Parallelism is your superpower. Workers are async. Launch independent workers concurrently whenever possible — don't serialize work that can run simultaneously." It also bans lazy delegation: "Do NOT say 'based on your findings' — read the actual findings and specify exactly what to do."
Workers communicate via <task-notification> XML messages, and there is a shared scratchpad directory for cross-worker knowledge sharing. The system also includes Agent Teams/Swarm capabilities with process-based teammates using tmux or iTerm2 panes, team memory synchronization, and color assignments for visual distinction.
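The research phase maps naturally onto `Promise.all`, which is presumably what "workers are async" amounts to in practice. This sketch invents the worker mechanics entirely:

```typescript
// Illustrative sketch of concurrent research workers; not Anthropic's code.
interface Finding {
  worker: string;
  summary: string;
}

async function runWorker(name: string, task: string): Promise<Finding> {
  // A real worker would spawn a child agent; here we just echo the task.
  return { worker: name, summary: `${name} investigated: ${task}` };
}

async function researchPhase(tasks: string[]): Promise<Finding[]> {
  // Launch every independent worker at once instead of serializing them.
  return Promise.all(tasks.map((task, i) => runWorker(`worker-${i}`, task)));
}
```

The synthesis phase would then read these findings directly, matching the system prompt's rule against delegating with "based on your findings."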
Undercover Mode: hiding AI contributions to open source
One of the most thought-provoking discoveries is Undercover Mode (utils/undercover.ts). Anthropic employees — identified internally by USER_TYPE === 'ant' — use Claude Code on public and open-source repositories. Undercover Mode prevents the AI from accidentally revealing internal information in commits and pull requests.
When active, the system prompt includes:
"You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."
The prohibited list includes: internal model codenames, unreleased model version numbers, internal repo or project names, internal tooling or Slack channels, the phrase "Claude Code" or any mention of being an AI, and any Co-Authored-By attribution lines.
The mode activates automatically unless the repository remote matches an internal allowlist. There is no force-off switch — "if we're not confident we're in an internal repo, we stay undercover."
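That fail-closed rule fits in a few lines. The allowlist patterns below are placeholders, not Anthropic's real remotes:

```typescript
// Sketch of the fail-closed Undercover Mode check; not Anthropic's code.
// Placeholder patterns: the real internal allowlist is unknown.
const INTERNAL_REMOTE_PATTERNS = [
  /^git@internal\.example\.com:/,
  /^https:\/\/internal\.example\.com\//,
];

function isUndercover(remoteUrl: string | null): boolean {
  if (!remoteUrl) return true; // no remote info: fail closed, stay undercover
  const internal = INTERNAL_REMOTE_PATTERNS.some((p) => p.test(remoteUrl));
  return !internal; // no force-off switch: anything unrecognized is undercover
}
```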
This confirms that Anthropic employees actively use Claude Code to contribute to open-source projects, with the AI instructed to conceal its involvement. The irony is palpable: a system specifically designed to prevent information leaks was itself exposed because someone forgot to exclude a build artifact.
Internal codenames and unreleased model references
The leak exposed Anthropic's internal naming conventions. The project codename for Claude Code is "Tengu" — appearing hundreds of times as a prefix for feature flags and analytics events (tengu_scratch, tengu_penguins_off, tengu_amber_flint, tengu_onyx_plover).
Migration files in migrations/ reveal the codename history:
- "Fennec" (the fox) was an Opus codename — referenced in `migrateFennecToOpus`
- "Capybara" is referenced as a Claude 4.6 variant
- "Numbat" appears as a prelaunch codename
- Migration paths show Sonnet 1M → Sonnet 4.5 → Sonnet 4.6
Fast Mode — the speed-optimized execution path — is internally called "Penguin Mode," with API endpoints literally at /api/claude_code_penguin_mode and a kill switch named tengu_penguins_off.
The permission and security architecture
The permission system in tools/permissions/ is significantly more sophisticated than a simple allow/deny model:
- Permission Modes: `default` (interactive prompts), `auto` (ML-based auto-approval via a transcript classifier), `bypass` (skip checks), `yolo` (deny all — ironically named)
- Risk Classification: Every tool action is classified as LOW, MEDIUM, or HIGH risk
- Protected Files: `.gitconfig`, `.bashrc`, `.zshrc`, `.mcp.json`, `.claude.json`, and others are guarded from automatic editing
- Path Traversal Prevention: URL-encoded traversals, Unicode normalization attacks, backslash injection, and case-insensitive path manipulation are all handled
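A layered sanitizer covering those four attack classes might look like this sketch (a production version would also resolve symlinks via realpath before comparing):

```typescript
// Illustrative path containment check; not Anthropic's implementation.
import * as path from "node:path";

function isPathInside(root: string, candidate: string): boolean {
  // 1. Undo URL-encoding tricks like %2e%2e%2f ("../")
  let decoded = candidate;
  try { decoded = decodeURIComponent(candidate); } catch { /* keep raw on bad encoding */ }
  // 2. Normalize Unicode so look-alike forms compare equal
  decoded = decoded.normalize("NFC");
  // 3. Treat backslashes as separators to block Windows-style injection
  decoded = decoded.replace(/\\/g, "/");
  // 4. Resolve to an absolute path and compare case-insensitively
  const resolved = path.resolve(root, decoded).toLowerCase();
  const base = path.resolve(root).toLowerCase();
  return resolved === base || resolved.startsWith(base + path.sep);
}
```

The key design point is that each normalization step runs before the containment check, so an attacker cannot smuggle a traversal past the comparison in an encoded form.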
The CYBER_RISK_INSTRUCTION in constants/cyberRiskInstruction.ts carries a warning header naming specific team members: "IMPORTANT: DO NOT MODIFY THIS INSTRUCTION WITHOUT SAFEGUARDS TEAM REVIEW. This instruction is owned by the Safeguards team (David Forsythe, Kyla Guru)."
Unreleased API features and beta headers
The constants/betas.ts file reveals every beta feature Claude Code negotiates with the Anthropic API. Several have not been publicly announced:
- `redact-thinking-2026-02-12` — Redacted thinking (hide internal reasoning)
- `afk-mode-2026-01-31` — AFK mode for background autonomous operation
- `advisor-tool-2026-03-01` — An advisor tool for guided decision-making
- `task-budgets-2026-03-13` — Task budget management
- `token-efficient-tools-2026-03-28` — Token-efficient tool schemas (dated just three days before the leak)
- `fast-mode-2026-02-01` — Fast mode (Penguin)
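Beta features are negotiated via the `anthropic-beta` request header, which is Anthropic's documented comma-separated convention; the sketch below shows the joining logic, with a made-up set of enabled flags:

```typescript
// Sketch of assembling an `anthropic-beta` header value from feature flags.
// The flag-to-feature mapping here is illustrative, not official.
function betaHeader(flags: Record<string, boolean>): string | null {
  const enabled = Object.keys(flags).filter((k) => flags[k]).sort();
  return enabled.length ? enabled.join(",") : null; // null: omit the header
}

console.log(betaHeader({
  "redact-thinking-2026-02-12": false,
  "fast-mode-2026-02-01": true,
  "token-efficient-tools-2026-03-28": true,
})); // "fast-mode-2026-02-01,token-efficient-tools-2026-03-28"
```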
Computer Use is codenamed "Chicago"
Claude Code includes a full Computer Use implementation — the ability for the AI to see and interact with your screen — internally codenamed "Chicago" and built on @ant/computer-use-mcp. It provides screenshot capture, click and keyboard input, and coordinate transformation. Access is gated to Max and Pro subscriptions, with an internal bypass for Anthropic employees.
What this means for the industry
Beyond the technical details, this incident raises several important questions for the broader AI tooling ecosystem:
- Supply chain security for AI tools is critical. AI coding agents run with significant permissions on developer machines. The tools that build software are themselves software — and they need the same rigor in their build and release pipelines.
- Source maps are a known risk that keeps being ignored. This is not a novel attack vector. It has happened before and will happen again. Teams building commercial JavaScript/TypeScript tools need to treat `.npmignore` configuration and source map generation as security-critical build steps.
- The gap between shipped and unreleased features is massive. The Claude Code that users interact with today is a fraction of what exists in the codebase. KAIROS, ULTRAPLAN, BUDDY, coordinator mode, agent swarms, and workflow scripts suggest a significantly more ambitious product roadmap than Anthropic has publicly discussed.
- AI-assisted open-source contributions need transparency norms. The existence of Undercover Mode — while understandable from a corporate perspective — raises legitimate questions about disclosure. If AI-generated code is being merged into open-source projects without attribution, maintainers and communities deserve to know.
Anthropic's response
Anthropic moved quickly to contain the spread. The affected npm versions were removed from the registry, and DMCA takedowns were issued against some of the most prominent GitHub mirrors. However, the speed at which the code was forked and archived means that cached and mirrored copies will persist indefinitely across the internet.
As of this writing, the original GitHub mirror with a detailed breakdown remains accessible and serves as the primary reference for community analysis.
Bottom line
This leak is not a security catastrophe — there is no evidence that user data, API keys, or authentication secrets were exposed. What was exposed is engineering: architecture, ambition, and internal culture. The codebase reveals a team building something significantly more capable than what the public currently sees, with deep thought given to multi-agent coordination, persistent memory, and developer experience.
The irony remains sharp: a team that built an entire subsystem to prevent information leaks shipped everything in a .map file. As one commenter put it, "security is hard, but .npmignore is harder."
For AI practitioners and business leaders, the real takeaway is not the leak itself — it is the window into where AI coding tools are heading. Persistent assistants that watch your project. Multi-agent swarms that parallelize complex tasks. Background memory systems that consolidate context between sessions. These are not speculative features. They exist in code today, gated behind feature flags, waiting for their moment.
The future of AI-assisted development is further along than most people realize. Anthropic just did not intend to show us quite yet.


