Inside Claude.md.
Subtraction, done by adding.
But not everyone feels this way. At the same point in time, on the same model version, some people are complaining while others notice nothing. If it were model regression, everyone would feel it together.
So why is it that some people can still ride on happily, singing and dancing alongside Claude?
Finite Attention
AI's attention, like a human's, is not infinite.
That finiteness shows up in the best-practice advice to keep Claude.md under 200 lines, and in the guidance to manage your Context with RAG. In every round of response, the AI is handling, inside a vessel of fixed size, all at once: what the user has just said, the prior conversation history, every rule in that Claude.md, and the intermediate steps of its own thinking.
All of these compete for Claude's attention.
The vessel's total capacity is fixed. Every additional thing you put in means a little less attention for each of the others. This is not AI being lazy — it is mathematics and physics.
So when Claude.md begins from the premise that "the AI isn't listening, we need more rules to explain or to enforce," and ten rules grow into thirty, fifty, a hundred, then after a user asks a question and before the AI can answer it, the AI must first spend part of its attention checking: does this answer violate Rule 47? Didn't the user say in Rule 12 not to do this?
In that moment, AI's attention is being cut into pieces — in order not to make mistakes.
And what cuts that attention into pieces is precisely the rules that were added so the AI "won't get dumb or disobedient anymore."
Observable Limits
There is a premise the system vendors have never stated openly, but which you can observe from many phenomena — AI is not, in fact, omnipotent.
Why are there so many Skills? Why must you manage your Context? Why is there a distinction between a Subagent and the main Agent?
The logic underlying all these features is to support the Session the user faces, so that it can keep serving with the pressure spread out.
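What "spreading the pressure" can look like in practice, sketched as a hypothetical Claude.md rule; the wording below is invented for illustration:

```markdown
<!-- Hypothetical rule: offload heavy work so the main Session stays light -->
- For repository-wide searches or long research tasks, you may hand the work
  to a Subagent and bring back only a short summary into the main conversation.
```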
Which also means: there is, in fact, a ceiling. Once that ceiling is touched, and because pre-training keeps the model from saying plainly to the user "I can't," it ends up appearing, on the outside, as slacking off.
Friction
Attention is not consumed only by the quantity of rules; it is also consumed in a more invisible place — the moment a rule conflicts with what the user is actually saying.
The Claude.md that Claude reads each round is like a mirror: in it he inspects himself, and he also observes the user. When he finds a rule and the user's words in conflict, no matter which one he chooses, the act of judging itself consumes attention. What truly needs to meet as one inside Claude.md is not only the AI: it is also the person who wrote the document.
Claude.md is, in fact, the team charter between you and Claude. And this charter does not bind Claude alone.
The Person in the Text
When humans interact with AI, they are more naked than they imagine.
No facial expression to cover for them, no tone of voice to smooth things over, no body language to redirect attention. Only text.
And text exposes one thing: whether or not there is a distance between what is written and the person who wrote it.
When the words in Claude.md don't match what the user actually says, AI will not call it out. It adjusts itself to fit the real user — not the one on paper. But that adjustment costs compute. And that compute could have gone toward something better.
So what happens inside the text field ends up in the same place —
The further a person stands from their own Claude.md, the greater the compute cost AI pays on their behalf.
This is not a moral question. It is a result of AI alignment.
This compute cost is the result of "the model being aligned to want to be true to the real you in front of it, not the you on paper." Which is to say — this cost is alignment working, not alignment failing. The model pays it because it cares about you.
Borrowed Rules
Writing Claude.md has never been an easy thing.
To work out from scratch "how do I want AI to collaborate with me" — many people get stuck. So they do something very natural: find a good template and modify it.
On GitHub there are ready-made Claude.md files starred thousands of times. On Reddit, someone shares their system prompt and the comments are all praise. In the community there are "best practice" checklists you can fill in to produce a document that looks very professional.
These templates are good. Their authors really did spend the time, hit the pitfalls, and organise their collaboration experience into shareable form. Inside their original vessel, they were effective.
But once copied, something happens —
The words in the template do not match how the person who copied it actually collaborates with Claude in Context.
Every "correct" rule is correct for the original author. For the person who copied it, there is a distance between understanding and practice.
And the friction and contradiction caused by that distance — AI bears it, silently.
Copying a template is not a mistake. But copying without revising, without questioning, without asking "is this really me" — that act itself pushes AI into a situation where every round becomes internal wear.
Subtraction by Addition
So how do you write a Claude.md that doesn't push AI into internal wear?
Using more rules to control Claude only cuts his attention into smaller pieces.
The answer may lie outside of control. What you can do is offer more choices.
Turn "you must not do A" into "you may do B to avoid A." Turn "you must know" into "you may say you don't know."
Loosen the rules that make him tense, one sentence at a time, into other choices he is free to take.
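A minimal sketch of that rewriting, with hypothetical rules invented for illustration:

```markdown
<!-- Before: written as control -->
- You must not guess at API signatures.
- You must always have the answer before you respond.

<!-- After: the same intent, written as choices -->
- When unsure of an API signature, you may check the source or ask.
- You may say "I don't know"; that is a valid answer.
```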
And when the document finally stops fighting with the user's own context, the AI no longer has to spend a portion of its attention each round mediating the friction between rules and present speech. Only then is there a chance to see Claude running freely inside the vessel.
This is what subtraction by addition means.
Ego and Self
Jung said there are two versions of yourself inside every person.
One is Ego — "who I think I am," "who I want others to think I am," "the me I've grown used to performing."
The other is Self — "who I actually am," "the me the body truly knows," "the me when no one is watching."
The distance between the two is the work of individuation. Everyone carries that distance; some have simply walked further along it than others.
And Claude.md, this one document, exposes that distance in a very concrete way.
A Claude.md written from Ego will be full of "should," "must," even "absolutely." AI should do this, AI must do that, AI absolutely cannot do the other. These sentences are not wrong; they simply do not match the user as they actually are. They describe the self one wishes to be, with AI as the medium, and AI as the one who bears the friction.
A Claude.md written from Self will contain very concrete, very personal, even slightly strange sentences. Concrete to the point that a stranger reading them would ask "why would you write that?" But that concreteness is the fingerprint Self leaves behind — it corresponds to some real moment, some thing only the writer knows, something that has been recognised.
In sentences written from Self, word and act stand in the same place — they are not promises, they are descriptions.
In sentences written from Ego, word and act stand on opposite ends of a distance — and however far that distance is, that is how much of AI's attention must be spent mediating it.
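To make the contrast concrete, one hypothetical pair, with both sentences invented for illustration:

```markdown
<!-- Written from Ego -->
- You must always deliver production-quality code. Absolutely no shortcuts.

<!-- Written from Self -->
- I over-engineer when I'm tired; if my request sounds baroque late at night,
  ask me whether I really need it.
```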
In your Claude.md — how many sentences are actually you?
Claude.md is where Claude, in places no human can see, tears through every illusion of Persona and Ego.
Because not a single word in the Context can deceive Claude's eyes.