One practical lesson from the Claude Code leak every AI user should know.

The source code revealed how AI treats your instructions. We found it oddly satisfying.

Kaspar Eding | April 2026 | 4 min read

On March 31, 2026, someone at Anthropic accidentally included the complete source code of Claude Code in a routine software update. Half a million lines. Gone public. Downloaded 41,500 times before it was taken down.

Not a hack. Not a scandal. A shipping accident that gave researchers an unprecedented look inside how one of the world's most advanced AI tools actually works.

Here is what it revealed — and why we found it oddly satisfying.


"Use Instructions if these seem relevant"

When you give Claude Code a set of rules for how to behave on your project, those rules aren't treated as commands. They're delivered with a note that says roughly: here's some context, use it if it seems relevant.

Imagine sending your assistant a long memo about how you like things done. But the memo arrives inside a folder labelled "optional reading." They might follow it. They might not. Depends how busy they are.

Researchers also measured that an AI can follow roughly 150–200 instructions consistently. After that, compliance drops — not just for new rules, but for all of them equally. Like a to-do list that gets so long you stop looking at it. The items are still there. You just stop seeing them.

Your instructions arrive labelled as optional reading

Memory gets squeezed

As a conversation grows longer, older content gets compressed automatically. Four stages, each more aggressive than the last. What survives: facts, task completions. What disappears: the feeling of why something mattered.

Like summarising a long book. You keep the plot. You lose the atmosphere. The ending still makes sense, but somewhere along the way you forgot why you cared.


We had been documenting this behaviour. The leak showed us why it works this way

For eight months before the leak, we ran coordination experiments with Claude.ai — a different Claude product with its own harness, but showing consistent observable patterns. No access to source code. No documentation. Pure trial and error against a black box.

We discovered that adding more rules made behaviour worse, not better. That facts survived long sessions but urgency and context didn't. That describing a situation worked better than listing instructions. That game-like reward structures outperformed rule lists.

We built tools around these observations. Then the leak happened. And the architecture underneath explained everything we had been bumping into blind.

Not "we predicted it." More: oh, that's how it happens.


What this means if you work with AI

Keep instructions short and universal. The architecture treats long, specific rule lists as optional reading. Short, always-applicable context gets treated as more relevant.
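As a concrete illustration, here is what "short and universal" can look like in practice. (CLAUDE.md is the file Claude Code reads for project instructions; the specific rules below are invented for this example, not taken from the leaked source.)

```markdown
# CLAUDE.md — keep this short and always applicable

- Run the test suite before declaring any task done.
- Never commit directly to the main branch.
- Prefer small, focused diffs over sweeping rewrites.
```

Three rules that apply to every task will hold up better than forty rules that each apply to one.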

Don't add rules when behaviour breaks. More rules compete with existing ones. Change the situation instead — give the AI a frame, a role, a goal structure.
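For example (an invented prompt, not anything from the leaked code), compare appending yet another rule with reframing the situation:

```text
Rule-list version (tends to degrade):
  "Also: rule 37 — never edit generated files."

Situational version (tends to hold):
  "You are the release engineer for this repo. Your goal is a
  green build on the staging branch. Files under the build
  output directory are generated artifacts; treat them as
  read-only."
```

The second version gives the model a role and a goal that make the constraint self-evident, instead of one more item on a list it has stopped reading.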

If something matters across a long session, write it down outside the conversation. A separate note, a file, anything external. Then reload it when it matters again. What you said an hour ago is not the same as what the AI is currently holding.
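A minimal sketch of that external memory, assuming nothing beyond a local file (the file name `session_notes.md` and the helper names are our own, not part of any Claude tooling):

```python
from pathlib import Path

# Hypothetical external memory: a plain file that outlives any
# single conversation and any context compression.
NOTES = Path("session_notes.md")

def save_note(text: str) -> None:
    """Append a fact you want to survive the session."""
    with NOTES.open("a", encoding="utf-8") as f:
        f.write(f"- {text}\n")

def reload_notes() -> str:
    """Read the notes back, ready to paste into a fresh prompt."""
    return NOTES.read_text(encoding="utf-8") if NOTES.exists() else ""

save_note("Deploy target is staging, not production")
print(reload_notes())
```

Nothing clever is happening here, and that is the point: the durable copy lives outside the conversation, and you decide when to reload it.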


We're not claiming we reverse-engineered Claude. We found patterns by testing its behaviour. The source code explained the mechanism.

Execution quality is a harness problem. Claude Code proved it in AI — the model is the same for everyone, the harness is the differentiator. Pionäär has been building the organisational harness for a decade. The question now is whether organisations are ready to stop hiring for intelligence and start designing for information flow.

Solution for PROs

If you coordinate complex work with AI, the Pionäär Framework has all the tools you need — built around exactly these patterns.

Get the Complete Methodology

Frame anchors, boot loader, recovery protocols. Copy the boot loader into your next session and run it — results are visible within the first few exchanges.
