Dear Negotiation Explorer,

Welcome to issue 37 of our NegoAI series.

Last week we talked about institutional memory — how structuring the MEMORY.md file turns a flat list into compounding knowledge. That file is one layer of how you feed context to your AI.

But memory is only half the picture.

When you open a Claude Project, you see two distinct boxes. One is called Custom Instructions. The other is called Project Knowledge. In Microsoft Copilot's agents, you see the same pair — an Instructions field and an attached knowledge section. Two boxes, side by side, in every serious AI copilot tool.

Most people fill one and leave the other empty. And they never ask why there are two.

The Half-Built Copilot

The most common failure mode I see is this: you build a Claude Project for your negotiation work. You spend time on the Custom Instructions — your role, your process, how you want the AI to think. Careful, considered, good.

You leave Project Knowledge empty.

Then you start working with it. The AI is professional. It structures well. It follows your instructions. But somehow, it sounds like an AI that could help any negotiator. Nothing it says is specific to your deals, your counterparts, your industry, your voice. It's you — minus everything that makes you, you.

The opposite failure happens too. Someone dumps twelve PDFs into Project Knowledge — frameworks, past deals, articles — and leaves Custom Instructions blank. Now the AI knows plenty, but it has no idea how you want it to think. It gives textbook answers built from your books. Knowledgeable, but unruly.

Both failures have the same root cause: the copilot has one drawer filled, not two.

The Split

Every AI copilot worth having has two distinct pieces of context, and they do two different jobs.

Knowledge Base (KB) is what the AI knows — the body of expertise it reasons from. For a negotiation copilot, this is the negotiation knowledge itself: the principles, methods, and patterns that shape every informed answer.

System Instructions (SI) is how the AI behaves — the role, process, rules, pushback style, and output format that shape the way it operates.

Different drawers. Different updates. Different jobs. Content versus conduct.

The reason this matters is simple: an AI that only has SI is a smart generic assistant. An AI that only has KB is a knowledgeable but shapeless one. A copilot that's actually yours needs both, deliberately separated.

Why the Knowledge Base Carries the Weight

If I had to pick which drawer matters more, I'd pick the Knowledge Base — and this is where most people underweight their effort.

Here's why. An AI model that hasn't been given specific knowledge falls back on what it learned during pretraining — the internet, books, generic content. Competent, but average. Every negotiator asking the same model the same question gets broadly the same answer. Nothing in that answer is yours. And in negotiation, generic advice is the worst kind of advice.

Let me show you how I've designed this with Deepak, the AI negotiation agent I've been building.

Deepak has two files behind him. One is the System Instructions file. The other is the Knowledge Base file. They are physically separate, they update at different cadences, and they do different jobs.

Deepak's SI defines how Deepak operates as a prep partner — the process to follow, when to challenge, when to push back, how to structure output, what kinds of answers I don't want (generic, textbook, confident without basis).

Deepak's KB is a compendium of negotiation knowledge drawn from the scholars I rely on — Deepak Malhotra, Thompson, and others. It's organized in two parts. The first covers the mechanics of negotiation: interests and positions, BATNA and reservation value, first offers and anchoring, concessions and haggling, value creation, leverage, standards, post-settlement. The second covers the people side: framing and perception, building trust, communication, managing emotions, cognitive and motivational biases, influence, dealing with difficult tactics and deception, long-term relationships. That's the body of negotiation thought I want Deepak to reason from.

And here's the piece that matters most — the design decision that separates Deepak from a generic AI copilot:

Deepak draws knowledge only from the Knowledge Base I provide. Not from general pretraining. Not from the model's internet-learned defaults.

If it's not in the KB, Deepak doesn't know it.

That constraint is the point. Not a limitation — the whole architecture.

When the AI is sealed to a KB I curate, two things happen. First, the output stops being generic. Deepak's recommendations draw from principles I trust, methods I've tested, patterns I've seen — not from whatever the internet averaged together during training. Second, the KB becomes a discipline. If Deepak gives a poor recommendation, I know where to look: the KB was missing something, or what's there isn't sharp enough. The problem is findable, fixable, and mine to improve.

But a sealed KB only works if the KB is built to be retrieved. Not dumped. Architected.

Deepak's KB is organized deliberately. Each idea is written as a self-contained chunk — 150 to 300 words — so Deepak can pull it without dragging context it doesn't need. The chunks sit in two parts (the mechanics and the people side), connected by strategic links — explicit cross-references that say "when this concept matters, also look here."

For example, the section on BATNA and reservation value has a strategic link to the biases chapter: before finalising your walk-away numbers, check them against motivational and cognitive biases. When Deepak reasons about BATNA, it follows that link automatically — pulling in bias-detection logic that might not otherwise surface.

Some ideas appear in more than one place on purpose, so the AI finds them regardless of how the question is asked. And there are several paths through the KB: by phase of the negotiation, by theme, by concept index.

That architecture is why a sealed KB produces sharp answers instead of flat ones. Contents alone aren't enough. The KB has to be built so the AI can navigate it the way you'd navigate your own thinking.
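The chunk-and-link structure can be sketched in a few lines of Python. This is an illustrative assumption about the shape of such a KB, not Deepak's actual file format: each chunk is a self-contained passage with explicit strategic links, and retrieval pulls the chunk plus everything its links point to.

```python
# A minimal sketch of a chunked knowledge base with strategic links.
# The chunk names, fields, and text here are illustrative, not the real KB.

KB = {
    "batna": {
        "text": "BATNA and reservation value: your walk-away alternatives...",
        # Strategic link: before finalising walk-away numbers, check biases.
        "links": ["biases"],
    },
    "biases": {
        "text": "Cognitive and motivational biases that distort valuations...",
        "links": [],
    },
}

def retrieve(topic: str) -> list[str]:
    """Pull a chunk plus every chunk its strategic links point to."""
    chunk = KB[topic]
    return [chunk["text"]] + [KB[link]["text"] for link in chunk["links"]]

# Asking about BATNA automatically surfaces the bias chunk as well.
for passage in retrieve("batna"):
    print(passage)
```

The design choice the sketch captures: the cross-reference lives in the KB itself, so bias-checking surfaces whenever BATNA is reasoned about, without the user having to ask for it.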

An AI that draws from everything is no one's AI. An AI sealed to a KB you curate — slowly, deliberately, over months — is yours.

What Goes in Each Drawer

Here's how I split it concretely for negotiation work. Use this as a map for your own copilot.

Knowledge Base (the negotiation compendium your AI reasons from):

  • The mechanics — interests and positions, BATNA and reservation value, first offers and anchoring, concessions and haggling, value creation (logrolling, integrative trades, contingency contracts), leverage, standards and norms, post-settlement.

  • The people side — framing and perception, building trust and rapport, communication (calibrated questions, tactical empathy), managing emotions, cognitive and motivational biases, strategies of influence, dealing with difficult people and tactics, lies and deception, long-term relationships.

  • The scholars whose thinking you want the AI to stand on — for me, Deepak Malhotra, Thompson, and a handful of others whose work I trust.

System Instructions (how the AI behaves):

  • Role: "You are my negotiation prep partner. Not a generic assistant."

  • Process: "Always check interests before positions. Challenge weak assumptions early."

  • Pushback style: "Push back on my logic when it's soft. Don't agree by default."

  • Output format: "Structure responses as Situation → Analysis → Recommendation. Give me options, not conclusions."

  • Guardrails: "Flag when you're outside your expertise. Flag when you need more context."

  • Decision logic: "When I ask for advice, give two or three options with trade-offs, not a single answer."

And here's the common mistake: people put behavior in the KB ("Remember to always push back on my reasoning") or put knowledge in the SI ("The difference between interests and positions is…"). Both wrong. Behavior belongs in SI, where it shapes how the AI operates. The negotiation content — methods, principles, patterns — belongs in KB, where the AI retrieves it when a question calls for it.

When you conflate them, the AI doesn't know which signals are rules and which are facts. It treats everything as roughly equal, and the result is — again — generic.

See It in the Tools You Already Have

Open a Claude Project right now. You'll see two boxes. Custom Instructions is drawer 2 — System Instructions. Project Knowledge is drawer 1 — Knowledge Base.

Open a Microsoft Copilot agent. Same pair. Instructions is drawer 2. Attached Knowledge sources — documents, SharePoint links, URLs — is drawer 1.

The architecture is already there, in whatever tool you use. What's been missing isn't the tool. It's knowing which box is which — and that both need to be filled, deliberately, with different kinds of content.

Two drawers. Content in one, conduct in the other. Sealed, curated, architected — that's the difference between a copilot that sounds like an AI and a copilot that sounds like yours.

Build It With Me — Live

If you're ready to stop reading about copilots and actually build one, I'm running two free Lightning Lessons in May, on consecutive Mondays.

Monday May 11 — Build Your AI Negotiation System: No Code, Any Tool. Sixty minutes, end to end. We'll take a live case and fill both drawers — Knowledge Base and System Instructions — in whichever tool you use (Claude, Copilot, or ChatGPT Projects). You'll leave with a working copilot, not a recording you mean to try later.

Monday May 18 — Build Your AI Negotiation Workflow: No Code. One week later, we extend. A single copilot is powerful. But a real deal needs a process — preparation, rehearsal, debrief — and sometimes several AI agents working together. LL2 is where the copilot you built in LL1 becomes a workflow you actually run.

The two sessions work together. LL1 builds the copilot. LL2 wires it into how you actually work.

If you want the full system — the copilot, the workflow, and the multi-agent architecture behind it, built with you over five weeks — cohort 2 of Build Your AI Negotiation System starts May 25. Details: https://maven.com/nego-ai/build-your-ai-negotiation-system

Questions? Reply directly — I read every response.

