Dear Negotiation Explorer,
Welcome to issue 36 of our NegoAI series.
Last week, I walked you through the five context files that turn Claude from a tool into a thinking partner — CLAUDE.md, CLAUDE_CONTEXT.md, INTERACTION.md, MEMORY.md, USER.md. The workspace, end to end.
Today I want to open just one of them.
MEMORY.md is the file where the collaboration compounds. It's the only one of the five that grows with every session. For someone like Sarah, preparing for her negotiation with the consulting firm, it's the institutional memory of a single deal: every insight about the counterpart, every tactic she's refined, every assumption she's revised. For me, across twenty-six repositories — agent builds, courses, research, this newsletter — each MEMORY.md is the same thing at a different scale: the living record of how the work has evolved.
And like any living record, it needs maintenance. Not at the level of content — the content accumulates naturally. At the level of structure. Because how memory is organized turns out to matter as much as what's in it.
The Flat List Problem
A flat list grows into noise. The AI has to scan everything to find what's relevant, and with enough entries, relevance gets diluted. Compounding reverses. Instead of getting sharper, the collaboration gets a little slower every week.
That's the mechanical failure mode. The practical one is more specific.
In a flat list, all entries carry equal weight. An insight Sarah logged three weeks ago about the lead partner sits next to one she logged yesterday sits next to a note about the commercial terms. Nothing tells Claude which bucket each entry belongs to. Nothing tells Claude which entries have been refined since. Nothing tells Claude that a pattern about the counterpart's behavior also affects the internal briefing for Sarah's CFO.
So Claude does what any reader does with a long, flat list: it reads the whole thing, weighs each entry roughly equally, and picks what seems closest to the current question. Sometimes that's the right entry. Sometimes it's one that's been superseded. The AI isn't misbehaving. It's reading unstructured memory the only way unstructured memory can be read.
The improvement isn't to write less memory. It's to give the memory a shape. When that shape is right, the AI stops scanning and starts navigating — and the compounding I described last week actually happens.
Four structural rules do the work. I'll walk you through each one using Sarah's deal as the example.
Rule 1 — Rooms: Group by Topic, Not Just by Type
Sarah's MEMORY.md has four top-level categories: Decisions, Lessons, Patterns, Rejected Approaches. Useful, but one-dimensional. An insight about the lead partner's communication style and a decision about the commercial concession ladder sit next to each other, unrelated.
The improvement is a second dimension. Inside each category, entries get grouped under topic subheadings — what I call rooms:
## Decisions
### Counterpart Profile
- [2026-04-02] Lead partner's LinkedIn uses heavy
"stewardship" and "partnership" language. Kahneman flags
this as potential polished persona, not authentic
preference — test early with a data-based challenge.
- [2026-04-05] Procurement lead is anchor; partner
overrides on anything above 10% movement.
### Commercial Terms
- [2026-04-03] Walk-away threshold set at $16.2M after CFO
alignment. Board wants $17M floor; CFO will defend $16.2M.
- [2026-04-06] Fee structure: prefer fixed + performance
over pure fixed — aligns incentives, reduces scope creep.
### Internal Stakeholders
- [2026-04-01] CFO agreed to the mandate letter; board
review scheduled for post-deal.
### Tactics & Scenarios
- [2026-04-04] Opening: anchor high on scope, soften on
timeline. Tests whether partner reveals his real priority.
Same file. Same entries. But now when Claude helps Sarah rehearse for tomorrow's call, it goes straight to the Counterpart Profile room — not a generic scan of everything about the deal. What improves concretely: Claude's references become specific ("applying what we logged about the partner's overrides on the 10% threshold") instead of vague ("based on your prep notes"). Specificity in reference is specificity in reasoning.
The rooms are different for every deal. For a procurement professional, they might be Supplier Profile, Total Cost of Ownership, Internal Approvals, Negotiation Levers. For a cross-border negotiator, they might include Cultural Signals and Regulatory Constraints. The principle doesn't change: whatever the recurring kinds of judgment calls are, each gets its own room.
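The rooms format is regular enough that a short script can navigate it the same way Claude does. A minimal Python sketch, assuming the exact heading and date conventions shown above and one entry per line (the real file wraps long entries; handling that is left out). The function name and sample text are mine, for illustration only:

```python
from collections import defaultdict

def parse_rooms(text):
    """Group '- [YYYY-MM-DD] ...' entries by (category, room).

    Assumes '## Category' and '### Room' headings and one
    entry per line, as in the MEMORY.md excerpt above.
    """
    rooms = defaultdict(list)
    category = room = None
    for line in text.splitlines():
        if line.startswith("### "):           # a room inside a category
            room = line[4:].strip()
        elif line.startswith("## "):          # a new top-level category
            category, room = line[3:].strip(), None
        elif line.startswith("- ") and category and room:
            rooms[(category, room)].append(line[2:].strip())
    return dict(rooms)

sample = """\
## Decisions
### Counterpart Profile
- [2026-04-02] Persona-risk flag on the lead partner.
### Commercial Terms
- [2026-04-03] Walk-away threshold set at $16.2M.
"""
rooms = parse_rooms(sample)
```

Nothing in the workflow requires this script; the point is that a structure an AI can navigate is also a structure a ten-line audit can check, which pays off with the maintenance habit in Rule 4.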
Rule 2 — Temporal Status: Mark What's Been Superseded
Decisions evolve as intelligence improves. Three weeks ago, Sarah's MEMORY.md said: "Procurement lead appears to be the decision-maker — focus pre-negotiation influence there." Two weeks later, the counterpart memory she built from past deals flagged a pattern: "Partner overrides procurement on anything above 10% movement." The original decision wasn't wrong for what she knew then. It's wrong now.
If that original entry stays in the file with no marker, Claude has two contradictory signals and no way to know which one governs today. Sarah could walk into the room targeting the wrong person.
The improvement is a tag on the superseded entry:
- [2026-03-22] [Superseded → see 2026-04-05]
Procurement lead appears to be the decision-maker —
focus pre-negotiation influence there.
- [2026-04-05] Procurement lead is anchor; partner
overrides on anything above 10% movement. Brief the
internal team accordingly.
The older entry stays — sometimes you need to remember why you moved, especially if a new signal contradicts the new reading and the old one needs to come back on the table. But the tag tells Claude the decision isn't active.
What improves: when Sarah runs her rehearsal prompt the morning of the negotiation, Claude references the current read, not the outdated one. No drift. No targeting the wrong person because an old assumption was still visible.
For any negotiator: the opening position you held in January that shifted when the counterpart disclosed their existing commitments. The BATNA logic you revised after your CFO's pricing review. The tactic you used on the last deal with this firm that doesn't fit this one. Mark the old entry superseded with a date. Don't delete — the history is useful.
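Because the tag format is fixed, separating active entries from superseded ones is mechanical. A minimal Python sketch, assuming the exact `[Superseded → see YYYY-MM-DD]` tag shown above; the helper name and sample entries are mine:

```python
import re

# Matches the tag shown above, e.g. "[Superseded → see 2026-04-05]".
SUPERSEDED = re.compile(r"\[Superseded → see (\d{4}-\d{2}-\d{2})\]")

def active_entries(entries):
    """Return only the entries that are still in force."""
    return [e for e in entries if not SUPERSEDED.search(e)]

entries = [
    "[2026-03-22] [Superseded → see 2026-04-05] Procurement lead "
    "appears to be the decision-maker.",
    "[2026-04-05] Procurement lead is anchor; partner overrides "
    "above 10% movement.",
]
current = active_entries(entries)
```

The history stays in the file; the filter just shows what governs today, which is exactly the view you want when briefing the AI the morning of a negotiation.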
Rule 3 — Tunnels: Link What Shows Up in Multiple Places
Some insights aren't confined to one room — they reappear across several. The Kahneman flag about the lead partner's polished-persona risk lives naturally in the Counterpart Profile room. But it also directly informs the Tactics room (how Sarah will test authenticity in the first five minutes of the call) and the Internal Stakeholders room (how she'll brief her CFO about what to expect from his opening posture).
The improvement is a cross-reference tag:
### Counterpart Profile
- [2026-04-02] Kahneman flags polished-persona risk on
the lead partner's collaborative language.
[→ also: Tactics > Opening Test, Internal > CFO Briefing]
What improves: when Claude reads the entry, it knows where else the same thread lives. If Sarah asks Claude to draft her CFO briefing note, Claude picks up the persona-risk flag automatically — even though the flag lives in a different room. Cross-room patterns become visible without Sarah having to remember to connect them each time.
A second kind of tunnel points across deals. If this consulting firm uses a playbook Sarah has seen before, a tunnel in this deal's MEMORY.md can point to her counterpart memory from the earlier engagement: [→ also: past_deals/2025-Q3-advisory-firm/]. The knowledge compounds not just within the deal, but across her whole negotiating career.
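The tunnel tag is just as mechanical to follow. A small Python sketch that extracts the targets of a `[→ also: ...]` tag, assuming the comma-separated format shown above; the function name is mine:

```python
import re

# Matches the tunnel tag shown above,
# e.g. "[→ also: Tactics > Opening Test, Internal > CFO Briefing]".
TUNNEL = re.compile(r"\[→ also: ([^\]]+)\]")

def tunnels(entry):
    """Return the cross-reference targets of an entry, if any."""
    match = TUNNEL.search(entry)
    return [t.strip() for t in match.group(1).split(",")] if match else []

entry = ("[2026-04-02] Kahneman flags polished-persona risk. "
         "[→ also: Tactics > Opening Test, Internal > CFO Briefing]")
targets = tunnels(entry)
```

Each target is just a "Room > Entry" label, so the same function works whether the tunnel points inside the deal or across deals to a past_deals/ path.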
Rule 4 — Contradiction Check: Maintain While You Write
The first three rules are structural. This one is a habit.
Every time Sarah logs a new entry, the rule is: scan the same room for entries that the new one supersedes or contradicts, and tag them right then. Not later. Not when a problem shows up. At the moment the new entry is created.
When she logged "Partner overrides procurement above 10% movement" on April 5, the same session should have tagged the earlier "procurement lead is the decision-maker" entry as superseded. One action, both entries updated. If she defers that to "next session," the memory starts carrying contradictions, and Claude starts drifting.
What improves: without this habit, the Superseded tag is passive — it only works if someone remembers to check. With it, the memory maintains itself as it grows. The structural rules stay honest over weeks and months, not just on the day they were set up.
This is the smallest rule, and the most important. Structured memory that's maintained in real time keeps improving. Structured memory that isn't maintained decays back into noise, no matter how well it started.
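The habit can also be backstopped by a lint pass: scan the file for `[Superseded → see DATE]` pointers whose target date has no entry, which is the most common symptom of a deferred update. A minimal Python sketch under the same formatting assumptions as the excerpts above (one dated entry per line); the function name is mine:

```python
import re

DATE = re.compile(r"^\s*- \[(\d{4}-\d{2}-\d{2})\]")          # entry dates
POINTER = re.compile(r"\[Superseded → see (\d{4}-\d{2}-\d{2})\]")

def dangling_supersedes(text):
    """Return pointer dates that no entry in the file carries."""
    lines = text.splitlines()
    dates = {m.group(1) for line in lines if (m := DATE.match(line))}
    return [d for line in lines
            for d in POINTER.findall(line) if d not in dates]

good = ("- [2026-03-22] [Superseded → see 2026-04-05] Old read.\n"
        "- [2026-04-05] New read.")
bad = "- [2026-03-22] [Superseded → see 2026-04-09] Old read."
```

A lint catches broken pointers, not missed contradictions — only the habit catches those — but it keeps the tags honest between sessions.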
Why I Didn't Install the Research Tools
A short note, because I looked hard before settling on these four rules.
Two recent research systems caught my attention. MemPalace stores every conversation verbatim in a local vector database and retrieves on demand. Mem0 uses an LLM to extract and curate memories automatically. Both are serious, well-designed systems.
I didn't install either. MemPalace's strength — total recall — is also its weakness. Everything gets stored, relevant or not, and search results surface noise alongside signal. Mem0 is smart, but the LLM decides what to keep, what to update, what to delete. In negotiation work, where precision matters and the cost of a wrong reference can be a real deal, I want those decisions made by me, not delegated.
What I took from both was the structural vocabulary — rooms, temporal status, tunnels, contradiction checks. Four ideas. Zero dependencies. They turned a growing list into a working system, without adding infrastructure.
What This Means for You
You don't need Claude Code to apply this. You don't need any specific platform. If you keep any kind of persistent notes for your AI — project instructions, a reference document, a running file for a deal you're preparing — the same four rules apply.
Are your entries grouped by topic, or just dumped chronologically?
When a decision changes, do you mark the old version or just overwrite it?
When the same insight affects multiple parts of the deal, is the connection visible — or are you rediscovering it every time?
When you add something new, do you check whether it supersedes something old?
If most answers are no, the memory is there but the structure isn't. And the AI reads the memory the only way it can — as a flat list where everything matters equally.
This Week's Exercise (20 minutes)
Pick your most active deal, your next big negotiation, or a recurring AI workflow you return to — a prep routine, a client analysis you run often, a training template. Whatever notes you're keeping for your AI about it (project instructions, a running doc, the context you paste into sessions), open it.
Step 1 (5 minutes): Read it top to bottom. Notice whether any entry could be ambiguous or contradict another.
Step 2 (10 minutes): Add three topic subheadings — rooms — that reflect the work. For a deal: Counterpart, Commercial Terms, Internal Stakeholders, Tactics. For a recurring workflow: whatever the stable categories of judgment are. Regroup the existing entries underneath. Don't write new content. Just restructure.
Step 3 (5 minutes): Find one entry that's been superseded by a later decision — a read of the counterpart, a position, a tactic, a method. Tag the old entry with the date of the decision that replaced it. Don't delete.
Three small moves. You'll feel the difference in your next session.
The compounding doesn't come from the structure itself. It comes from the fact that structured memory stays useful as it grows. That's why every week, I refine a piece of this system — and why every week, the collaboration gets a little sharper.
The five context files make the AI informed. The structure inside those files is what makes the collaboration compound. It's worth the time to get it right — and the time to keep improving it.
If You Want to See This Come Together
Last week's issue showed you the five context files. This one opened the file where the collaboration compounds. The natural next question is what you actually build on top of them.
On May 11, I'm running a free Lightning Lesson — Build Your AI Negotiation Copilot: No Code, Any Tool — using Sarah's deal as the live case. Sixty minutes, end to end: the context files, the memory structure from this week, and how they come together into a working copilot you can use in any tool.
Save your seat here — it's free: https://maven.com/p/d82a4e/build-your-ai-negotiation-copilot-no-code-any-tool
If you want the full system — the copilot plus the multi-agent workflow behind it, built with you over five weeks — cohort 2 of Build Your AI Negotiation System starts May 25. Details: https://maven.com/nego-ai/build-your-ai-negotiation-system
Questions? Reply directly — I read every response.
