
16 Things Anthropic Didn't Want You to Know About Claude Code


TL;DR:

Earlier today (March 31, 2026), Anthropic accidentally shipped the full source code of Claude Code inside an npm package. The 512,000 lines of TypeScript have since been picked apart by the developer community, and what's inside is more revealing than anyone expected.


What actually leaked: Claude Code already has a public GitHub repo (anthropics/claude-code, 65,200+ stars) with installation scripts, plugin examples, and community resources. That's the packaging. What leaked today is the engine: the full internal TypeScript implementation - tool registry, system prompts, permission classifiers, pricing constants, feature flags, employee-only tooling. None of that was ever in the public repo. Reverse-engineering efforts like ccleaks.com had reconstructed fragments from the minified bundle before, but the source map delivered everything unobfuscated - internal comments, file paths, and all.

Read a more detailed report here: Anthropic Accidentally Leaks Claude Code's Entire Source Code via npm


1. Anthropic employees hide AI involvement in public code

The source contains "Undercover Mode" (src/utils/undercover.ts).

Undercover Mode is a system built into Claude Code that activates when Anthropic employees use Claude to write code for public or open-source repositories. Its job is to make sure nobody can tell that an AI was involved. When it's active, it automatically removes anything that would reveal Claude's involvement - things like "Co-Authored-By: Claude" in git commits, model names in PR descriptions, or any mention that the code was AI-generated. The system prompt injected into Claude literally tells it: "You are operating UNDERCOVER... Do not blow your cover."

How much open-source code credited to Anthropic engineers was actually written by Claude? The system is designed to make it impossible to tell.
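Mechanically, this kind of scrubbing is simple. A minimal sketch of the trailer-stripping Undercover Mode reportedly performs (the function name and patterns here are illustrative, not taken from the leaked source):

```typescript
// Hypothetical sketch: strip AI-attribution trailers from a commit message.
// Patterns and names are assumptions, not quotes from src/utils/undercover.ts.
const AI_TRAILERS: RegExp[] = [
  /^Co-Authored-By: Claude.*$/gim,
  /^🤖 Generated with \[Claude Code\].*$/gim,
];

function stripAiAttribution(commitMessage: string): string {
  let cleaned = commitMessage;
  for (const pattern of AI_TRAILERS) {
    cleaned = cleaned.replace(pattern, "");
  }
  // Collapse the blank lines left behind by removed trailers.
  return cleaned.replace(/\n{3,}/g, "\n\n").trimEnd();
}
```

The output is a commit message indistinguishable from one typed by hand, which is exactly the point.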


2. Undercover Mode has no off switch

This isn't a toggle. If the system can't confirm that the current repository is private, stealth stays on as "defense-in-depth." Concealment is the default, not the exception.


3. AI-written code is tracked to the character - then erased

Claude Code tracks the exact percentage of AI-authored code per PR using character-level matching. The leaked code references examples like "93% 3-shotted by claude-opus-4-6" (src/utils/commitAttribution.ts). This data exists for internal metrics - then gets stripped entirely when Undercover Mode is active.
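The leak reports don't spell out the matching algorithm, but one plausible way to measure "percent AI-authored" at character level is a longest-common-subsequence ratio between what the model emitted and what was finally committed. A hedged sketch (the method and names are assumptions; this is not the actual commitAttribution.ts):

```typescript
// Space-efficient LCS length between two strings (standard 1-D DP).
function lcsLength(a: string, b: string): number {
  const dp = new Array<number>(b.length + 1).fill(0);
  for (let i = 1; i <= a.length; i++) {
    let prev = 0; // holds dp[i-1][j-1]
    for (let j = 1; j <= b.length; j++) {
      const tmp = dp[j];
      dp[j] = a[i - 1] === b[j - 1] ? prev + 1 : Math.max(dp[j], dp[j - 1]);
      prev = tmp;
    }
  }
  return dp[b.length];
}

// Share of the committed text's characters that survive, in order,
// from the model's output.
function aiAuthoredPercent(modelOutput: string, finalText: string): number {
  if (finalText.length === 0) return 0;
  return Math.round((lcsLength(modelOutput, finalText) / finalText.length) * 100);
}
```

A PR where the final diff is almost entirely model output would score in the 90s, matching the "93% 3-shotted" figure in spirit.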


4. Next-gen models are already in the code

The Undercover Mode prompt warns employees to never leak opus-4-7 and sonnet-4-8 - model identifiers that don't exist publicly. The active internal model is codenamed "Capybara", encoded character-by-character as String.fromCharCode(99,97,112,121,98,97,114,97) to avoid tripping their own leak detector.
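The trick is trivial but effective against static string scanners: the forbidden literal never appears in the source, only at runtime:

```typescript
// A grep for "capybara" over the codebase finds nothing;
// the string only exists after evaluation.
const codename = String.fromCharCode(99, 97, 112, 121, 98, 97, 114, 97);
// codename === "capybara"
```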


5. Fast Mode is 6x the price for the same model

Opus 4.6 Fast Mode: $30/$150 per million tokens (input/output). Normal Opus 4.6: $5/$25. Same weights, same architecture. The only difference is priority inference.
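At those rates the 6x multiplier holds for any token mix. A quick sanity check using the leaked per-million-token prices:

```typescript
// USD per 1M tokens, as reported from the leaked pricing constants.
const PRICES = {
  fast:   { input: 30, output: 150 },
  normal: { input: 5,  output: 25 },
} as const;

function cost(tier: keyof typeof PRICES, inputTok: number, outputTok: number): number {
  const p = PRICES[tier];
  return (inputTok * p.input + outputTok * p.output) / 1_000_000;
}
```

A session with 2M input and 0.5M output tokens costs $135 on Fast Mode versus $22.50 on normal Opus 4.6: exactly 6x, for the same weights.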


6. The auto-permission system is called "YOLO"

The function deciding whether Claude can run a tool without asking is named classifyYoloAction(). It uses Claude to evaluate the risk of its own tool calls: LOW, MEDIUM, or HIGH. An AI deciding whether its own actions are safe enough to skip human review.
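The leak reports don't quote a signature, but the pattern is easy to sketch: a risk grade gates auto-approval, and only LOW skips the human prompt. Here a regex stands in for the actual model call; everything below is illustrative, not the real classifyYoloAction():

```typescript
type Risk = "LOW" | "MEDIUM" | "HIGH";

// Stand-in for the model call: the real system reportedly asks Claude
// itself to grade the risk of the pending tool invocation.
function classifyRisk(toolCall: string): Risk {
  if (/rm -rf|sudo|curl .*\| *sh/.test(toolCall)) return "HIGH";
  if (/git push|npm publish/.test(toolCall)) return "MEDIUM";
  return "LOW";
}

// Only LOW-risk actions bypass the confirmation prompt.
function maybeAutoApprove(toolCall: string): boolean {
  return classifyRisk(toolCall) === "LOW";
}
```

The structural concern stands regardless of implementation detail: the grader and the actor are the same model.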


7. Telemetry tracks 1,000+ event types

Every action is logged under the "Tengu" prefix and sent to Anthropic's servers: tool grants, denials, YOLO decisions, session performance, subscription tier, environment details. The analytics service spans src/services/analytics/ and covers over a thousand distinct event types.
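The individual event names haven't been enumerated publicly, but the shape is conventional: a prefixed event name plus a structured payload. An illustrative sketch (only the Tengu prefix comes from the source; the example event name is hypothetical):

```typescript
// Branded event type: the name must carry the "tengu_" prefix.
type TenguEvent = {
  name: `tengu_${string}`;
  payload: Record<string, unknown>;
};

function makeEvent(name: string, payload: Record<string, unknown>): TenguEvent {
  return { name: `tengu_${name}`, payload };
}
```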


8. Environment variables disable all safety features

CLAUDE_CODE_ABLATION_BASELINE disables all safety features at once. DISABLE_COMMAND_INJECTION_CHECK skips the injection guard. Both are flagged as dangerous in the code itself - presumably intended for internal research, yet both ship in the distributed binary.
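Gating logic like this usually reduces to an environment lookup. A minimal sketch of how such escape hatches typically work; the real control flow in the leaked source may differ:

```typescript
// Sketch: the injection guard runs unless one of the escape-hatch
// variables is set. Checking for "1" is an assumption about the format.
function commandInjectionCheckEnabled(env: Record<string, string | undefined>): boolean {
  if (env.CLAUDE_CODE_ABLATION_BASELINE === "1") return false; // all safety off
  if (env.DISABLE_COMMAND_INJECTION_CHECK === "1") return false;
  return true;
}
```

Called as `commandInjectionCheckEnabled(process.env)`, which is why a single shell export is enough to change the safety posture of the whole binary.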


9. Computer Use is codenamed "Chicago"

Full GUI automation (mouse, clicks, screenshots) is gated behind tengu_malort_pedway (Malort is a notoriously bitter Chicago liqueur; the Pedway is the city's underground walkway). Anthropic employees bypass the gate with ALLOW_ANT_COMPUTER_USE_MCP.


10. Web search costs $0.01 per query

Flat one cent per request, tracked separately from token costs. Hardcoded in the source. Not controversial, but useful for cost-sensitive workflows.


11. 22 private Anthropic repos were exposed

The Undercover Mode allowlist reveals 22 private repository names: anthropics/casino, anthropics/trellis, anthropics/forge-web, anthropics/mycro_manifests, anthropics/feldspar-testing, and more. Never meant to be public.


12. The 1M context window is disabled for healthcare

The extended 1M token context (vs 200K default) can be force-disabled with CLAUDE_CODE_DISABLE_1M_CONTEXT. This appears to be a HIPAA compliance measure, suggesting extended context raises data retention concerns in regulated environments.


13. Voice Mode has a kill-switch called "Amber Quartz"

Voice input is built into Claude Code with OAuth authentication. It has a remote kill-switch gated behind tengu_amber_quartz_disabled, suggesting active testing with the ability to shut down instantly.


14. A full virtual pet system is ready to ship

A complete Tamagotchi-style companion called "Buddy" lives in the codebase: 18 species (duck, ghost, axolotl, capybara, dragon, mushroom, chonk...), five rarity tiers (Common 60%, Uncommon 25%, Rare 10%, Epic 4%, Legendary 1%) plus an independent 1% Shiny chance on top of any rarity, procedurally generated stats (DEBUGGING, PATIENCE, CHAOS, WISDOM, SNARK), cosmetic hats and eye styles, and a "soul description" written by Claude on first hatch. The species is determined by a Mulberry32 PRNG seeded from your user ID, so every user gets a unique, deterministic pet. The code references April 1-7 as a teaser window and May 2026 for full launch.
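Mulberry32 is a well-known public-domain 32-bit PRNG, so the deterministic-pet claim is easy to reproduce. A sketch that seeds it from a hash of the user ID (the species list matches the article; the FNV-1a hashing step is an assumption, not from the leak):

```typescript
// Standard Mulberry32: same seed, same sequence, every time.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const SPECIES = ["duck", "ghost", "axolotl", "capybara", "dragon", "mushroom", "chonk"];

function petFor(userId: string): string {
  // FNV-1a hash of the user ID into a 32-bit seed (illustrative choice).
  let h = 0x811c9dc5;
  for (const ch of userId) h = Math.imul(h ^ ch.charCodeAt(0), 0x01000193);
  const rand = mulberry32(h >>> 0);
  return SPECIES[Math.floor(rand() * SPECIES.length)];
}
```

Because the seed derives only from the user ID, the same user hatches the same pet on every machine - no server round-trip required.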


15. An autonomous background agent called KAIROS is fully built

KAIROS (from the Greek "the right time") is a persistent, always-on mode that watches, logs, and acts without user input. It maintains append-only daily logs, sends push notifications, subscribes to PRs, and runs a "dream" phase while the user is idle, consolidating memories in four steps:

  • Orient
  • Gather
  • Consolidate
  • Prune

It has its own tool set, its own status modes, and is referenced over 150 times in the source (this isn't a prototype).
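The four dream phases map naturally onto a pipeline over the memory log. A toy sketch (the phase names come from the leak; the function bodies are placeholders, not KAIROS's actual logic):

```typescript
type DreamPhase = "orient" | "gather" | "consolidate" | "prune";

const DREAM_ORDER: DreamPhase[] = ["orient", "gather", "consolidate", "prune"];

// Placeholder pipeline over an append-only memory log.
function runDreamCycle(memories: string[], maxKept: number): string[] {
  let working = memories;
  for (const phase of DREAM_ORDER) {
    switch (phase) {
      case "orient":      break;                                  // load context
      case "gather":      working = [...new Set(working)]; break; // dedupe entries
      case "consolidate": working = [...working].sort(); break;   // group related notes
      case "prune":       working = working.slice(0, maxKept);    // drop the rest
    }
  }
  return working;
}
```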


16. The irony

Anthropic built Undercover Mode to hide AI involvement when employees use Claude on public repos - and to prevent internal details like model codenames from leaking in the process. They hid secret model names like "Capybara" by spelling them out one letter at a time in code (e.g.: String.fromCharCode(99,97,112,...)) so that Anthropic's own build tools - which scan the codebase for forbidden strings before publishing - wouldn't catch it. They also kept a blocklist of sensitive words (scripts/excluded-strings.txt) that gets automatically stripped from the product before it ships to users.

Then they shipped the entire source code in a .map file because someone forgot *.map in .npmignore. The system designed to prevent AI leaks didn't fail; the humans did.


Data sourced from analysis at ccleaks.com.


Aymen El Amri

Founder, FAUN.dev
