Anthropic Claude Code leaked
A major incident has unfolded involving Anthropic, where the source code for its Claude Code command-line application was unintentionally exposed. While the underlying AI models remain secure, the leak revealed the full implementation of the tool itself. This is significant because Claude Code has rapidly gained traction among developers, making its internal design highly valuable to both competitors and the broader tech community.
The issue originated from a packaging mistake during the release of version 2.1.88 of the Claude Code npm package. A source map file was mistakenly included, which allowed anyone to reconstruct the entire codebase. This exposed nearly 2,000 TypeScript files and over half a million lines of code. Security researcher Chaofan Shou was among the first to highlight the issue publicly, after which the code quickly spread across platforms like GitHub and was widely replicated.
Anthropic publicly acknowledged the mistake in a statement to VentureBeat and other outlets, which reads: "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again."
Key learnings
1. What was it
Anthropic accidentally shipped their entire codebase inside a source map file; anyone who downloaded Claude Code version 2.1.88 had it. Among the leaked files, researchers found: (a) a fully built Tamagotchi-style companion pet with 18 species, rarity tiers, and unique stats, sitting behind a feature flag, ready to turn on; (b) "Kairos", an always-on background agent that watches what you're doing and takes action without you asking; (c) "Coordinator mode", one Claude instance spawning and managing multiple worker agents running in parallel; and (d) "Undercover mode", which automatically scrubs AI model names from git logs when Anthropic employees contribute to public repos.
2. The irony
Anthropic built a whole system to prevent internal information from leaking into public repos, then shipped its entire source code in a map file. The code spread like wildfire on GitHub, and many took it as proof that the future of AI agents has already arrived.
3. Meet Chaofan Shou (@Fried_rice)
This researcher discovered, on March 31, 2026, that Anthropic had accidentally leaked the entire source code of Claude Code via a JavaScript source map file in a public npm package. His tweet, which simply stated "Claude code source code has been leaked via a map file in their npm registry!" and included a direct download link, racked up over 32 million views. He had earned approximately $1.9 million in bug bounty rewards (including locked tokens) between 2020 and 2022 for discovering critical vulnerabilities across major platforms. The Anthropic leak remains one of the most significant source code exposures in recent AI industry history.
4. Anthropic uses its own model differently from how we use it
Normal users write prompts (instructions) like text messages, while Anthropic itself writes them like engineering specs. When Anthropic gives Claude instructions internally, it doesn't write conversational prompts. As the leaked code shows, every interaction is structured around three specific layers of constraint: (a) what tools to use ["Read this file using the file reader. Do not run any other commands."]; (b) what risks to flag ["If this action would delete data, stop and confirm first."]; and (c) what the output should look like ["Give the conclusion first. Then explain your reasoning."]
For every user request, Claude has to guess what the user actually means. These constraints remove guesswork and force precision. As our experience teaches us, the most useful prompts are always specific, not lengthy.
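The three constraint layers above can be sketched as a small prompt builder. This is a minimal illustration in TypeScript; the interface and function names are hypothetical and are not taken from the leaked code.

```typescript
// Sketch of constraint-first prompting: three explicit layers
// (tools, risks, output shape) assembled into one prompt.
// Constraints and buildPrompt are illustrative names only.

interface Constraints {
  tools: string[];   // what Claude may use
  risks: string[];   // what it must flag or confirm first
  output: string[];  // what the answer should look like
}

function buildPrompt(task: string, c: Constraints): string {
  return [
    `Task: ${task}`,
    `Tools: ${c.tools.join("; ")}`,
    `Risks: ${c.risks.join("; ")}`,
    `Output: ${c.output.join("; ")}`,
  ].join("\n");
}

const prompt = buildPrompt("Summarize src/auth.ts", {
  tools: ["Read the file with the file reader", "Do not run other commands"],
  risks: ["If an action would delete data, stop and confirm first"],
  output: ["Conclusion first", "Then the reasoning"],
});

console.log(prompt);
```

The point is not the code itself but the habit: every request names its tools, its risks, and its output shape before the model sees the task.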
5. Unique memory system being used
Claude Code has a memory system that stores notes about the user's project, preferences, and past decisions. Each note is kept under 150 characters, and the entire memory file stays below 200 lines. When Claude needs more detail, it follows a pointer to a specific topic file and reads just that section; it never loads everything at once. Users have a different habit, though: they paste documents, full conversations, and long lists into the prompt. Anthropic calls this problem "Context Entropy": the AI starts losing track of what matters as the context window grows. So what to do? Claude Code compresses aggressively, applying three levels of compression automatically as conversations grow long. For users, the best approach is to give a compressed summary of what matters.
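The memory discipline described above can be sketched as a pointer-based index that enforces the stated limits (150-character notes, a 200-line index). MemoryIndex and its fields are hypothetical names, not the real implementation.

```typescript
// Minimal sketch of a pointer-based memory index, assuming the
// limits described in the article: notes stay under 150 chars,
// the whole index stays under 200 lines, and detail lives in
// separate topic files reached via pointers.

const MAX_NOTE_CHARS = 150;
const MAX_INDEX_LINES = 200;

class MemoryIndex {
  private notes: string[] = [];

  add(note: string, topicFile?: string): void {
    if (note.length > MAX_NOTE_CHARS) {
      throw new Error(`note exceeds ${MAX_NOTE_CHARS} chars`);
    }
    if (this.notes.length >= MAX_INDEX_LINES) {
      throw new Error("index full: prune before adding");
    }
    // A pointer lets a later session read one topic file
    // instead of loading everything at once.
    this.notes.push(topicFile ? `${note} -> ${topicFile}` : note);
  }

  lines(): string[] {
    return [...this.notes];
  }
}

const mem = new MemoryIndex();
mem.add("Project uses pnpm, not npm");
mem.add("Auth flow details", "memory/auth.md");
console.log(mem.lines().length); // 2
```

The design choice worth copying: the index is a table of contents, not a dump, and detail is fetched on demand.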
6. How to actually make it work well
When Claude Code encounters a complex task, it splits itself via a system called "Coordinator Mode." One instance of Claude acts as the manager: it breaks the task into pieces, assigns each piece to a separate worker, then reads all the results and synthesizes them. The leaked code shows the coordinator prompt: ["Do NOT say 'based on your findings.' Read the actual findings and specify exactly what to do."] What's the lesson for users? Be precise when delegating, and in a complex project, don't try to do it all in one chat.
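The manager-worker split can be sketched as follows. This is an illustrative skeleton, not the leaked implementation: runWorker stands in for a call to a real worker agent.

```typescript
// Sketch of "Coordinator Mode": one manager splits a task,
// workers handle the pieces, and the manager synthesizes the
// results. Worker and coordinate are illustrative names; a real
// system would dispatch each subtask to a separate Claude agent.

type Worker = (subtask: string) => string;

function coordinate(task: string, subtasks: string[], worker: Worker): string {
  // Assign each piece to a worker and collect concrete findings.
  const findings = subtasks.map((s) => `- ${s}: ${worker(s)}`);
  // The coordinator prompt quoted above demands reading the actual
  // findings, not vague references, so the summary embeds them verbatim.
  return `Task: ${task}\nFindings:\n${findings.join("\n")}`;
}

const report = coordinate(
  "Audit the login flow",
  ["check token expiry", "check password hashing"],
  (s) => `reviewed (${s})`
);

console.log(report);
```

Users can mimic this manually: one chat plans and assigns, other chats execute narrow subtasks, and the plan chat gets the raw results back, not a paraphrase.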
7. Background process "autoDream"
The leaked code shows that during idle time, Claude runs a special session that reviews everything it has learned about your project and cleans up its memory. The instruction, found in Claude Code's consolidationPrompt.ts, reads: "You are performing a dream, a reflective pass over your memory files. Synthesize what you've learned recently into durable, well-organized memories so that future sessions can orient quickly." The pass works in four steps. First, it scans what it already knows. Second, it checks for new information from recent sessions. Third, it merges updates, fixes contradictions, and converts vague references like "yesterday's bug" into specific ones like "the authentication bug found on March 28." Fourth, it prunes: it removes anything outdated and keeps the index tight, under 200 lines.
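The four steps can be sketched as one pass over the memory index. Everything here, including the dreamPass name and the resolve callback, is an assumption made for illustration.

```typescript
// Sketch of the four-step "dream" pass: scan the index, ingest
// recent session notes, merge and make vague references specific,
// then prune to the line budget. Names are illustrative only.

const INDEX_BUDGET = 200;

function dreamPass(
  index: string[],
  recent: string[],
  resolve: (note: string) => string // e.g. "yesterday's bug" -> dated ref
): string[] {
  // Steps 1 and 2: scan what is already known, add new session notes.
  const merged = [...index, ...recent];
  // Step 3: fix vague references and drop exact duplicates.
  const specific = Array.from(new Set(merged.map(resolve)));
  // Step 4: prune, keeping the index under the budget.
  return specific.slice(0, INDEX_BUDGET);
}

const updated = dreamPass(
  ["yesterday's bug"],
  ["yesterday's bug", "switched to pnpm"],
  (n) => n.replace("yesterday's bug", "auth bug found on March 28")
);

console.log(updated); // ["auth bug found on March 28", "switched to pnpm"]
```

Note how resolving references before deduplicating is what lets the old note and the new one collapse into a single, dated entry.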
8. Message for Claude users
If one studies the leaked code, one can reverse-engineer how Anthropic built its own system. Start every session with a compressed context block, not humongous text blocks. Use constraint-first prompting: for every request, specify three things: what Claude should use (tools, files, references), what it should be careful about (risks, tone, boundaries), and what the output should look like (format, length, structure). Anthropic built 40+ individual tools, each designed to do exactly one thing (file reading, file editing, searching). When we ask Claude to "research and write and edit and format," we're asking it to juggle, which is a bad idea. Claude Code also has a dedicated "Plan Mode" for anything beyond simple tasks: before writing a single line of code, it explores the problem, proposes an approach, and waits for approval. Users can do the same: "Don't write anything yet. First, outline your approach and let me review it." Finally, run a dream pass at the end of long sessions, asking Claude to consolidate what happened, and save the summary.
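The workflow above, open with compressed context plus a plan-first instruction, close with a consolidation request, can be captured in two tiny helpers. The wording of the prompts is illustrative, not from the leaked code.

```typescript
// Sketch of the recommended session workflow: a compressed context
// block plus a plan-first instruction at the start, a consolidation
// request at the end. sessionOpener/sessionCloser are made-up names.

function sessionOpener(contextNotes: string[]): string {
  return [
    "Context (compressed):",
    ...contextNotes.map((n) => `- ${n}`),
    "Don't write anything yet. First, outline your approach and let me review it.",
  ].join("\n");
}

function sessionCloser(): string {
  return "Consolidate what happened this session into short, durable notes I can paste next time.";
}

const opener = sessionOpener(["Repo uses pnpm", "Auth bug fixed on March 28"]);
console.log(opener);
console.log(sessionCloser());
```

Saving the closer's output and pasting it into the next opener is the user-side equivalent of the memory consolidation loop.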
9. Dangerous!
Anthropic's own internal testing shows its latest model gets things wrong 29 to 30 percent of the time: nearly one in three claims has an error. That's why Anthropic built verification into every layer of the architecture. When Claude gives you a number, a date, or a fact, check it, because even Anthropic doesn't trust it without verification.
