On Monday — in the first of two Maven Lightning Lessons — I built a working AI negotiation prep agent live, from zero, and I built it twice: once in Microsoft Copilot, once in a Claude Project, on the same case. Here's what each one produced, the one move that made the method carry across both, and why the tool you reach for matters less than what you wrap around it.
Dear Negotiation Explorer,
Welcome to issue 40 of our NegoAI series.
Last week I told you why we built Von Neumann — the counterparty simulator that closes the gap between preparing for a negotiation and walking into one. That issue was about the why. This one is about the how — the part you can actually do this weekend.
Because on Monday I ran the first of two Maven Lightning Lessons — Build Your AI Negotiation System: No Code, Any Tool — and instead of slides, I did the thing I keep telling people to do: I built one of these systems live, from zero, in front of a room. Not the simulator — the agent upstream of it, the one that does the strategic preparation. And I built it twice, in two different tools, on purpose. The point wasn't "look, an agent." The point was what stays the same when the tool changes.
Part 2 is next Monday — same series, one level deeper. I'll come back to that at the end. First, here's what Part 1 showed: the build, the two prompts I typed, and the folder setup you can copy this weekend.
The Test I Wanted to Run
Two issues ago I made an argument: the model isn't the variable. Move the same prompt across Claude, ChatGPT, Gemini, Copilot and you get roughly the same answer — generic in, generic out — because the lever that actually matters is the harness you build around the model. Knowledge Base. System Instructions. Memory. The deal documents. The prompt is maybe a fifth of it.
That's a claim. Monday was the test.
Same case both times — the one I use in every workshop. Sarah Chen, Chief Procurement Officer at a pharma company, renegotiating an $18.5M consulting contract with McKinsey. A decade-long relationship. Senior partners who rarely show up; juniors running the work. Implementation rates below benchmark. Internal AI tools now matching a chunk of the routine strategic analysis. Three competitors circling with cheaper bids. Sarah's objectives going in: a real cost reduction, a shift from hourly billing toward outcome-based pricing, fewer consultants with guaranteed senior involvement — and the relationship preserved, because McKinsey still has the board's ear and deep institutional knowledge. Not a price fight. A restructuring.
Same task both times: produce a one-page negotiation preparation brief — interests, issues, BATNA, counterparty scenarios, the zone of possible agreement, value creation, objective arguments, a concession map, the questions to ask in the room.
The only thing I changed was the tool, and the harness I could fit inside it.
Tool 1 — Microsoft Copilot (the one you probably have)
I'll be honest: Copilot isn't the tool I reach for. But it's the tool most of you have, because it's inside the Microsoft environment your company already pays for — and increasingly it's the basic version, the one that nags you to upgrade for thirty euros a month. So that's where I started.
You can build agents in Copilot now — think custom GPTs, lighter. New agent, skip the wizard, name it, paste your instructions. The catch: the instruction field caps at about 8,000 characters. So the System Instructions had to be a stripped-down version — purpose, a few core rules, the voice, how to think, the output structure. Then I attached two files at the start of the chat: the case, and the Knowledge Base. (Copilot's "add specific websites" option doesn't fetch a knowledge base — it's a search filter — so the KB rides in as a chat-time document.) Then one line: "Analyze the attached case using the attached knowledge base as your only source of truth."
What came out was a competent prep brief. Interests laid out — cost efficiency, strategic continuity, implementation impact, senior attention, future-proofing against AI disruption. An issues agenda — fee level, pricing model, team size. A BATNA. Three counterparty scenarios: a defensive incumbent protecting revenue and its flagship client; an adaptive partner redefining the value proposition; and a low-probability play where McKinsey holds premium pricing as a status signal. A zone of possible agreement. A value-creation pass. Objective arguments, a concession map, key questions.
My honest read on stage: quite acceptable. Not deep — the scenarios were thin, the creative options ordinary — but a B2B negotiator could walk into a meeting with that and not be embarrassed. And remember: the System Instructions were under 8,000 characters. That's the ceiling Copilot put on me, not the ceiling of the method.
Tool 2 — A Claude Project (more headroom)
Then I rebuilt the same thing in a Claude Project. Same case, same task. Two things changed.
First, the System Instructions weren't 8,000 characters anymore — they were about 345 lines, roughly 20,000 characters. More sections, more structure, room for the parts that don't survive compression. Second, the Knowledge Base didn't ride in as a chat attachment — it lived inside the project, persistent, retrieved automatically. (And even that KB was a reduced version — about half my full one — and it was still plenty.)
The difference showed immediately. You could watch it reason — set the target, map McKinsey's likely position across four scenarios, generate diagnostic questions to figure out which scenario McKinsey is actually in, even ask whether this needs to be a consulting contract at all or could be restructured as an advisory-board arrangement.
The output that landed: four counterparty scenarios, each with a probability and a behavioral read — defensive retention at 40%, collaborative reinvention, and strategic exit among them. And a genuine value-creation engine. First an additive-creativity pass — eight specific prompts run in sequence: future relationship, third parties, non-monetary value, risk-sharing, the unspoken problem, constraint inversion, and cross-issue connections among them. Then a reframing pass. Then a three-tier synthesis — a hybrid retainer, a per-project model, an outcome-based component, a senior-weighted team. Negotiation-shaped output, not a template with the serial numbers filed off.
My read on stage: this gets to roughly 80% of what my full production prep agent does. For day-to-day negotiation work, that's not a compromise — it's more than enough. And both ChatGPT Projects and Claude Projects are at that level now; the gap to a purpose-built agent keeps narrowing.
One more thing worth noticing: the prompts I actually typed were almost embarrassingly short. For the Copilot agent — "Analyze the attached case study using the attached knowledge base as your only source of truth." For the Claude project, where instructions and KB were already loaded — "Analyze the attached case study. Use your system instructions. Use your knowledge base as your only source of truth." That's it. When the harness is built, the prompt gets short — it's the trigger, not a substitute for the work you did upfront. Make it yours: swap in your own case file and the name of your KB document; keep the "only source of truth" line — it's what stops the model wandering off into generic, web-averaged advice.
The One Move That Made It Travel
Two tools. Two fidelities — Copilot good, Claude Project better. But the architecture was identical across both, and one structural choice was doing the load-bearing work in each: keeping the System Instructions (how the AI behaves) and the Knowledge Base (what the AI knows) as two separate things — two files, two slots — not one merged blob.
I've made this case before, so I won't relitigate it here. What Monday confirmed is that it isn't a nice-to-have. Embed the two and reliability drops — the AI gets fuzzy about which part is behavior and which part is truth. Keep them separate and both tools, despite very different headroom, produced something you could use. The separation is what makes the method portable: it's the same two slots in Copilot, in a Claude Project, in a ChatGPT project. The model is the runtime. The architecture is the asset. Whatever your company hands you, the method carries.
(And if your next thought is "I'm not putting my deal strategy into an AI tool" — the honest answer is that these platforms are, today, more secure than the file shares most of us already trust without thinking: encryption in transit and at rest, independent audits, access controls, conversations private by default in shared projects, model training off unless you opt in. Privacy is a setting. Check it once, then use the tool.)
The Part That Isn't the Tool
Here's what a sixty-minute demo can't show you: the agent is only as good as what you feed it, and what you feed it doesn't live in the tool. It lives in a folder structure on your desktop — outside any project, surviving whatever tool you happen to be using this quarter.
If you've followed this series, you know the pitch — the workspace, the eight folders that mean your AI never starts from zero. I won't re-sell it. Here's the version that matters: the folders, then the habit almost nobody keeps. For a B2B negotiator, eight folders:
Identity — who you are as a negotiator, how you want the AI to work with you, your authority limits.
Knowledge Base — negotiation expertise plus your domain (procurement, sales, M&A), benchmarks, contract templates.
Accounts — one folder per counterpart you negotiate against repeatedly (suppliers if you buy, customers if you sell): how they negotiate, who really decides, deal history.
Active Deals — one folder per deal in flight: the brief, the strategy, where you are this week so a fresh chat picks up without re-explaining.
Memory — what compounds: deal debriefs, cumulative lessons, win-loss patterns.
Playbooks — your codified plays for recurring situations — renewals, sole-source pricing, objection handling — refined from your own debriefs, not a textbook.
Prompts — your reusable prompt library. Pull and adapt; don't rewrite from scratch.
Output Archive — the AI outputs worth keeping, organized by type.
You don't need all eight to start. If you build four, build Knowledge Base, Memory, Active Deals, and Prompts — the rest can follow. Same structure whether you buy or sell; only the contents change. And it's tool-agnostic on purpose: in six months "Claude Project" might be called something else, and you'll point the new tool at the same folders and lose nothing.
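If you'd rather script the setup than click it together, here's a minimal sketch that scaffolds the eight folders. The root name and the numbering prefixes are my illustrative choices, not part of the method — rename them to whatever reads naturally on your desktop:

```python
from pathlib import Path

# The eight workspace folders from the list above. The numbering and the
# root name are illustrative assumptions, not something the method requires.
FOLDERS = [
    "01_Identity", "02_Knowledge_Base", "03_Accounts", "04_Active_Deals",
    "05_Memory", "06_Playbooks", "07_Prompts", "08_Output_Archive",
]

def build_workspace(root: str = "NegotiationWorkspace") -> Path:
    """Create the folder skeleton; safe to re-run (existing files untouched)."""
    base = Path(root)
    for name in FOLDERS:
        (base / name).mkdir(parents=True, exist_ok=True)
    return base

if __name__ == "__main__":
    print(f"Workspace ready at {build_workspace().resolve()}")
```

Starting with four instead of eight? Trim the list to Knowledge Base, Memory, Active Deals, and Prompts and re-run later when you add the rest — `exist_ok=True` means nothing already there gets disturbed.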
But folders only compound if you feed them. So here's the discipline — the part nobody does and everybody should:
End every session with a summary. Whatever you used — a chat, a project, Copilot — before you close it, ask: give me a summary of everything we did here. Drop it in the right folder. An output you liked goes to the Output Archive. A prompt that worked goes to Prompts. A pattern you spotted about a counterpart goes to that account's folder. Then, every couple of weeks, point the AI at the whole structure and ask it to declutter — you'll have accumulated redundant files, stale prompts, half-finished notes. Fill it, organize it, maintain it. That loop is the actual work. The folders are easy; the habit is the leverage.
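The filing half of that loop is easy to semi-automate: paste the AI's summary into a small script (or a snippet tool wired to one) that drops it into the right folder with a date stamp. A sketch — the root and folder names here are assumptions matching the workspace above, so adapt them to your own:

```python
from datetime import date
from pathlib import Path

def file_summary(text: str, folder: str,
                 root: str = "NegotiationWorkspace") -> Path:
    """Save a pasted session summary as a dated Markdown note.

    `folder` is one of your workspace folders -- e.g. "05_Memory" for a
    debrief or "08_Output_Archive" for an output you liked. These names
    are illustrative; match them to however you named your folders.
    """
    target = Path(root) / folder
    target.mkdir(parents=True, exist_ok=True)
    stem = f"{date.today().isoformat()}-session-summary"
    path = target / f"{stem}.md"
    n = 1
    while path.exists():  # don't overwrite an earlier summary from today
        n += 1
        path = target / f"{stem}-{n}.md"
    path.write_text(text, encoding="utf-8")
    return path
```

The de-duplication loop matters more than it looks: two sessions in one day is exactly the kind of case that silently eats a debrief if you overwrite by date alone.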
What This Means for You
The tool is interchangeable. The workspace is yours.
If you take one thing from Monday's build, take this: stop waiting for the "right" AI. Open whatever your company gives you. Build the folders — eight if you're thorough, four if you're starting. Put your System Instructions in one slot and your Knowledge Base in another, separately. Run the same one-page prep on your next real deal. And end every session by filing what you learned.
None of that depends on which model you're on. All of it compounds. That's not a theoretical claim — in my research with 120 senior executives, AI-augmented preparation produced 48% higher individual deal value, and 84% higher joint gains when both sides used it (work I presented at the Harvard Kennedy School AI Negotiation Forum in January). The leverage is real. It just lives in the harness, not the headline.
What's Next on the Calendar
Monday May 18 — Part 2: Build Your AI Negotiation Workflow: No Code. Monday I built one agent. Next Monday I build the workflow — several agents working in sequence: an industry expert, Kahneman profiling the counterpart, Deepak building the strategy, and Von Neumann — the counterparty simulator you read about last week — running the rehearsal against it. Plus the rewind mechanic and the scoring. Part 1 showed you the foundation; Part 2 is the building. 45 minutes, free. Save your seat: https://maven.com/p/c70efd/build-your-ai-negotiation-workflow-no-code
Couldn't make Part 1? The recording's here: https://maven.com/p/d82a4e/build-your-ai-negotiation-system-no-code-any-tool. That's the foundation — Part 2 builds straight on it, live, next Monday.
Cohort 2 — Build Your AI Negotiation System, starts May 25. Five weeks, live, hands-on. The two Lightning Lessons show the architecture at two depths in two hours. Cohort 2 is where you build yours — for your industry, your role, your counterparts, your deals. Alumni reviews are on the landing page. Details: https://maven.com/nego-ai/build-your-ai-negotiation-system
This Week's Question
Which AI tool does your company actually put in front of you — and have you ever run real negotiation prep through it, or did you write it off after one generic answer?
Reply and tell me — I read every response.
