Dear Negotiation Explorer,

Welcome to issue 26 of our NegoAI series.

Last week, I shared a metaprompt for negotiation preparation—one prompt that generates a complete analysis framework covering your interests, their interests, BATNA, scenarios, and creative options.

Many of you tried it. Some asked: why does this work?

This week, I'm answering that question. I'll break down the metaprompt element by element, show you what's missing, and give you the improved version.

More importantly, you'll walk away with a framework you can apply to any prompt—not just negotiation.

The Old Advice vs. 2026 Reality

A year ago, prompting advice looked like this:

  • Write detailed personas with backstories

  • Use rigid templates for every request

  • Include multiple examples for every task

  • Specify exact output formats

This advice isn't wrong. But AI has evolved.

Models in 2026 are better at:

  • Inferring the right expertise from context

  • Following complex instructions without hand-holding

  • Structuring output sensibly without explicit format requirements

The elaborate prompting rituals matter less. What matters now is simpler—but most people still miss it.

The 5 Elements That Actually Matter

After teaching several AI and negotiation courses, I've found that quality prompts come down to five elements. Here they are, ranked by importance:

1. Context (The King)

The information you provide determines everything. Garbage in, garbage out.

This isn't just "include some background." It's about completeness:

  • What's the situation?

  • Who are the parties involved?

  • What's the history?

  • What are the constraints?

  • What have you already tried?

A negotiation prompt with vague context produces generic advice. The same prompt with detailed background, constraints, relationship history, and stakes produces insights you can actually use.

The question to ask yourself: If I handed this context to a smart colleague who knows nothing about my situation, would they have enough to help me?

2. Specific Outcome

Not "analyze this negotiation" but "identify the three strongest arguments they might use against my position."

The more precisely you define what you want, the better the output:

Vague → Specific:

"Help me prepare" → "Rank my interests by priority and identify which I could trade away"

"Analyze the other party" → "Give me 2-3 distinct scenarios for their BATNA and emotional drivers"

"Give me advice" → "What's the one thing I'm likely overlooking?"

Vague asks get vague answers. Specific asks get useful answers.

3. Purpose / Why

This is the element most people miss—and it was missing from last week's metaprompt.

When AI understands why you need something, it makes better judgment calls about:

  • What to emphasize

  • What level of detail to provide

  • What tone to use

  • What to include vs. leave out

Consider the difference:

Same context, same outcome request, different purpose:

"I need this analysis for a board presentation."

→ AI produces: formal structure, defensible reasoning, executive summary, anticipates skeptical questions

"I need this analysis to prepare for a difficult conversation tomorrow."

→ AI produces: practical talking points, emotional considerations, specific phrases to use, what to avoid saying

The context was identical. The outcome request was identical. But the purpose shaped everything.

This is what was missing from last week's metaprompt. We told AI what we wanted (interests, BATNA, scenarios, creative options) but not why we needed it or what decision it would support.

4. Standards & Constraints

What does "good" look like? What should AI avoid?

Last week's metaprompt included this:

"Ensure the BATNA analysis includes both explicit information from the context AND implicit information"

"Provide different scenarios that are non-overlapping"

"Revise your output before providing it with a quality self-assessment"

These criteria guided the output. Without them, AI makes its own assumptions about what "good" means.

Sometimes the constraint is more powerful than the positive instruction:

  • "Don't give me generic advice I could find in any negotiation book"

  • "Don't repeat information I already provided"

  • "If you're uncertain about something, say so rather than guessing"

5. Persona (When It Matters)

Last week's metaprompt used: "Act as Deepak Malhotra, the renowned Harvard professor."

This works because it accesses a specific framework—not generic negotiation expertise, but Malhotra's particular approach: finding the hidden constraints, reframing the problem, negotiating the impossible.

Persona matters when:

  • You want a specific lens or philosophy applied

  • You need the AI to commit to a perspective rather than hedge

  • Tone and style matter for the output

Persona matters less when:

  • The context already implies what expertise is needed

  • You're doing straightforward analysis

  • You don't care about the "voice" of the response

In 2026, persona is contextual—powerful when needed, optional when not.
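If you build prompts in code rather than by hand, the five elements map naturally onto a small template function. Here's a minimal Python sketch; the function name, section labels, and example values are my own illustrations, not a required format:

def build_prompt(context, outcome, purpose="", standards=None, persona=""):
    # Assemble the five elements into one prompt, skipping anything left empty.
    sections = []
    if persona:
        sections.append(f"Act as {persona}.")
    sections.append(f"Context:\n{context}")
    if purpose:
        sections.append(f"Why I need this: {purpose}")
    sections.append(f"What I want: {outcome}")
    if standards:
        sections.append("Quality criteria:\n" + "\n".join(f"- {c}" for c in standards))
    return "\n\n".join(sections)

# Example call with made-up values:
print(build_prompt(
    context="Contract renewal with a long-time supplier; prices rose 8% last year.",
    outcome="Rank my interests by priority and identify which I could trade away.",
    purpose="I meet them Thursday; this will shape my opening offer.",
    standards=["No generic advice I could find in any negotiation book",
               "If you're uncertain about something, say so rather than guessing"],
    persona="Deepak Malhotra, the renowned Harvard professor",
))

Leaving purpose, standards, or persona empty simply drops that section, which mirrors the point above: some elements are always needed, others only when they earn their place.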

Deconstructing Last Week's Metaprompt

Let's look at what we had:

Hi, I'm preparing for a negotiation. As attachment you have the full information and context.

What I would like you to do is to act as Deepak Malhotra, the renowned Harvard professor, and provide me with:

  1. My interest ranking

  2. My BATNA

  3. The plausible ZOPA

  4. The other party's BATNA and interests across 2-3 scenarios

  5. Understanding what is very important for them

  6. Their emotional drivers

  7. Creative options to enlarge the pie

Quality criteria:

  • Ensure the BATNA analysis includes both explicit and implicit information

  • Provide non-overlapping scenarios

  • Include at least one out-of-the-box scenario

  • Revise your output with a quality self-assessment

What it had:

  • Context: present. "As attachment you have the full information" relies on the user providing good context.

  • Specific Outcome: present. Seven clear deliverables.

  • Purpose / Why: missing.

  • Standards & Constraints: present. Four quality criteria.

  • Persona: present. Deepak Malhotra.

The gap: We never told AI why we needed this analysis.

The Improved Metaprompt

Here's the same metaprompt with the missing element added:

Hi, I'm preparing for a negotiation. As attachment you have the full information and context.

Why I need this: I have a meeting with the other party in [timeframe]. I need to walk in with a clear understanding of my priorities, their likely position, and creative options I can introduce if we get stuck. This analysis will directly inform my negotiation strategy and the specific offers I make.

What I would like you to do is act as Deepak Malhotra, the renowned Harvard professor, and provide me with:

  1. My interest ranking — what matters most to me, ordered by priority

  2. My BATNA — my best alternative if we don't reach agreement

  3. The plausible ZOPA — where our interests might overlap

  4. The other party's BATNA and interests — across 2-3 distinct scenarios

  5. What is very important for them — ranking their interests in each scenario

  6. Their emotional drivers — fears, pressures, motivations beyond the rational

  7. Creative options to enlarge the pie — at least 4-6, including unconventional ideas

Quality criteria:

  • Include both explicit information from the context AND implicit information you can infer

  • Scenarios must be non-overlapping — genuinely different profiles, not variations

  • Include at least one out-of-the-box scenario I might not have considered

  • Note your confidence level and assumptions

  • Revise your output before delivering with a quality self-assessment

What changed:

The "Why I need this" section. Three sentences that tell AI:

  • The timeline (meeting in [timeframe])

  • How I'll use it (inform strategy and specific offers)

  • What decision it supports (what to offer, what to prioritize)

This shapes everything that follows.
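If you reuse the metaprompt across negotiations, the purpose block is the part that changes each time. Here's a minimal sketch of that idea; the placeholder names (timeframe, usage, decision) mirror the three bullets above and are my own choice:

PURPOSE_TEMPLATE = (
    "Why I need this: I have a meeting with the other party in {timeframe}. "
    "I need to walk in with {usage}. "
    "This analysis will directly inform {decision}."
)

# Example with made-up values; swap in your own before pasting into the metaprompt.
purpose = PURPOSE_TEMPLATE.format(
    timeframe="48 hours",
    usage="a clear understanding of my priorities and their likely position",
    decision="my negotiation strategy and the specific offers I make",
)
print(purpose)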

The Framework Checklist

Before you send any prompt—negotiation or otherwise—run through this:

1. Context

  • [ ] Would a smart colleague understand the situation from what I've provided?

  • [ ] Have I included constraints, history, and stakes?

  • [ ] Is anything important missing?

2. Specific Outcome

  • [ ] Have I defined exactly what I want?

  • [ ] Am I being specific or vague?

  • [ ] Could I make the request more precise?

3. Purpose / Why

  • [ ] Does AI know why I need this?

  • [ ] Does AI know what decision this supports?

  • [ ] Does AI know how I'll use the output?

4. Standards & Constraints

  • [ ] Have I defined what "good" looks like?

  • [ ] Have I specified what to avoid?

  • [ ] Have I asked for confidence levels or assumptions?

5. Persona (if relevant)

  • [ ] Do I need a specific lens or framework?

  • [ ] Would a persona improve the output, or is it unnecessary?

This Week's Exercise (15 minutes)

Step 1: Find a prompt you've used recently—negotiation or otherwise.

Step 2: Run it through the checklist above. Score each element 1-5.

Step 3: Identify the weakest element. Rewrite that part of the prompt.

Step 4: Run both versions (original and improved) and compare the outputs.

The goal isn't perfection. It's noticing what you've been missing—and seeing how small additions change the output.
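If you prefer to keep score in code rather than on paper, here's a minimal sketch of Steps 2 and 3. The element names mirror the checklist, and the scores are placeholders for your own:

# Your 1-5 scores for a recent prompt (placeholder values).
scores = {
    "Context": 4,
    "Specific Outcome": 3,
    "Purpose / Why": 1,
    "Standards & Constraints": 2,
    "Persona": 3,
}

weakest = min(scores, key=scores.get)
print(f"Weakest element: {weakest} ({scores[weakest]}/5). Rewrite that part first.")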

What's Next

Next week, we'll tackle the other side: How do you know if the output is actually good?

You've built a quality prompt. AI gave you a response. But is it useful? Is it specific enough? Did it challenge your assumptions or just confirm them?

I'll share a framework for assessing AI output quality—so you can catch weak responses before you rely on them.

Live Session: February 13th

Want to move from prompting to building?

I'm running a live session on Maven showing how to create a simple AI negotiation assistant for purchasing—using Cassidy. It's designed for people who want a workflow that works for them automatically, not just one-off prompts.

Context. Outcome. Purpose. That's what makes a prompt work.
