Dear Negotiation Explorer,

Welcome to Week 23 of our NegoAI series.

Three years after its launch, ChatGPT is sitting inside boardrooms and deal rooms all over the world.

In my work with negotiators, one thing has become clear:

Today, to be an advanced negotiator, you need negotiation skills combined with AI literacy.

AI helps you prepare faster, see more angles, and enter key conversations with a clearer map of the negotiation.

But there is also a reliability gap we cannot ignore.

This week, I want to show you both sides – and give you simple prompts you can use in ChatGPT or Copilot to get the upside without falling into blind trust.

What AI is already doing for negotiation

In my research with 120 senior executives, negotiators who used AI support in their preparation:

  • Achieved 48% more individual value than those who did not

  • Saw joint gains increase by 84.4% when both parties used AI

  • Cut preparation time from days to under 30 minutes

The part that matters most is simple:

AI helps negotiators achieve better outcomes more consistently.

It does this by helping you:

  • Map underlying interests on both sides, not just positions and numbers

  • Explore package deals and trade-offs in parallel, instead of testing one idea at a time

  • Develop distinct scenarios about the other party’s interests and alternatives (their BATNA, constraints, and likely moves)

You are still the one who decides what to offer, when to concede, and where to walk away. But you do it on the basis of a richer, clearer map of the negotiation.

The mistake: treating a language model like a trusted advisor

There is, however, a persistent mistake:

We treat large language models as if they were smart colleagues, rather than powerful language machines with real limitations.

They are excellent at producing plausible text.

They are not designed to:

  • Maintain a stable world model of your deal or industry

  • Reason consistently across long chains of logic

  • Learn over time from mistakes in previous negotiations

In practice, this shows up in four ways negotiators will recognize:

  1. Hallucinations – confidently inventing “standard” clauses, norms, or practices

  2. Inconsistency – you ask the same question twice and get different strategies

  3. No real memory – they don’t remember what worked (or failed) last quarter

  4. Opacity – strong recommendations without exposing assumptions

That is the reliability gap:

The tool is strong enough to influence high-stakes decisions, but not reliable enough to be used blindly.

So the question in Year 3 is no longer “Can AI help negotiators?”

The real question is: How do we use it in a disciplined way?

A concrete example: a high-skill prompt for preparation

You don’t need workflows or code to improve reliability.

You can do a lot with a single, well-designed prompt.

Here is a generic template you can adapt to any important negotiation:

Prompt A – High-skill preparation prompt

You are an expert negotiation professor (for example, Deepak Malhotra at Harvard Business School). We are preparing for a high-stakes B2B negotiation. I will provide you with context and, if available, internal instructions from my side.

These are the tasks I would like you to complete:

1. Summarize my situation and objectives in this negotiation.

2. Identify and rank my interests (needs, objectives, constraints, concerns).

3. Analyze my best alternative to a negotiated agreement (BATNA). Include explicit and potential alternatives, and the consequences of no agreement.

4. Create three distinct scenarios for the other party, each with plausible ranked interests and BATNA.

5. Develop creative options that create value and “enlarge the pie”, opening opportunities for integrative outcomes.

Here are the quality standards for the output:

1. Outline interests beyond basic price.

2. Analyze the BATNA by determining both explicit and potential alternatives.

3. Be creative yet realistic with the other party’s scenarios.

4. Focus on delivering value-added insights beyond what a typical negotiator would do.

Please present the output with clear headings and bullet points for each section.

This prompt:

  • Narrows the role of the model to negotiation preparation

  • Sets clear tasks (interests, BATNA, scenarios, options)

  • Encodes quality standards directly in the instructions

Even without any technical setup, it already brings more structure and depth to your preparation.
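If you run many negotiations, you can also fill the template with a few lines of Python instead of editing it by hand. This is an optional sketch, not part of the prompt itself; the field names and deal details below are hypothetical placeholders:

```python
# Fill a shortened Prompt A template with the specifics of one negotiation.
# All field values are hypothetical examples; adapt them to your deal.

PROMPT_A = """You are an expert negotiation professor. We are preparing for a \
high-stakes B2B negotiation with {counterpart}.

Context: {context}
My role: {role}

Tasks:
1. Summarize my situation and objectives in this negotiation.
2. Identify and rank my interests (needs, objectives, constraints, concerns).
3. Analyze my BATNA, including explicit and potential alternatives.
4. Create three distinct scenarios for the other party, each with plausible \
ranked interests and BATNA.
5. Develop creative options that create value and enlarge the pie.
"""

def build_prompt_a(counterpart: str, context: str, role: str) -> str:
    """Return Prompt A with the negotiation specifics inserted."""
    return PROMPT_A.format(counterpart=counterpart, context=context, role=role)

prompt = build_prompt_a(
    counterpart="a long-term packaging supplier",
    context="annual contract renewal; volumes up 20%, unit price in dispute",
    role="head of procurement",
)
print(prompt)
```

The same idea works in a spreadsheet or a text expander; the point is to keep the structure fixed and swap only the specifics.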

Testing reliability: asking AI to assess its own work

You can go one step further and ask the model to evaluate its own output.

After the model answers Prompt A, paste this in the same conversation:

Prompt B – Quality assessment of the output

I would like you to perform a quality assessment of your output.

Please identify three or four factors against which to conduct this assessment, assign each factor a weight (from 1% to 100%), and then provide a score for each section of your output against each factor.

At the end of the quality assessment, please include recommendations for improvement.

This simple second step:

  • Forces the model to make its evaluation criteria explicit

  • Reveals where it sees weaknesses in its own work (missing interests, weak BATNA, shallow scenarios, etc.)

  • Creates a feedback loop: you can immediately ask it to improve the answer based on its own recommendations
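The weighting scheme Prompt B asks for is just a weighted average, which you can verify by hand. A minimal sketch, with hypothetical factor names, weights, and scores standing in for whatever the model actually returns:

```python
# Weighted quality score for one section of the model's output.
# Factor names, weights, and scores below are hypothetical examples of
# what Prompt B might produce; the model's actual factors will differ.

def weighted_score(weights: dict, scores: dict) -> float:
    """Combine per-factor scores (0-100) using weights that sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[factor] * scores[factor] for factor in weights)

weights = {"completeness": 0.40, "realism": 0.35, "creativity": 0.25}
batna_section = {"completeness": 70, "realism": 85, "creativity": 60}

# 0.40*70 + 0.35*85 + 0.25*60 = 72.75
print(weighted_score(weights, batna_section))
```

If the model's weighted numbers don't add up like this, that itself is a useful reliability signal.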

The assessment will not be perfect.

But it will help you see:

  • Patterns in how the model thinks about your negotiation

  • Blind spots that you should not delegate (for example, legal risk, sensitive relationship dynamics)

You move from “I trust it / I don’t trust it” to:

“I’m actively probing its reliability on this negotiation.”

This Week’s Exercise

Take 20 minutes to use AI to prepare one real negotiation, and test its reliability.

  1. Pick one real negotiation.

    Client, supplier, partner, or internal stakeholder

  2. Adapt Prompt A to your case.

    Insert your context, your role, your counterpart

    Keep the structure: interests, BATNA, three scenarios for the other side, creative options

  3. Run Prompt A in ChatGPT or Copilot.

    Read the full response once without editing

    Highlight anything that feels surprising, unclear, or implausible

  4. Run Prompt B in the same conversation.

    Read the quality assessment

    Note at least two weaknesses the model itself identifies (e.g., missing data, shallow analysis, overconfidence)

  5. Update your preparation.

    Decide which parts of the AI’s analysis you will keep

    Decide what you need to check in the real world (numbers, constraints, internal politics)

You have now used AI both to expand your thinking and to test its own work.
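If you prefer to script this two-step flow, the key detail is that Prompt B must be sent in the same conversation, so the model can see its own earlier answer. A sketch assuming the official openai Python package and an OPENAI_API_KEY environment variable (the model name is an assumption; the live calls are commented out so you can adapt them):

```python
# Run Prompt A, then Prompt B in the SAME conversation, so the model
# assesses its own earlier answer. `ask` takes the model call as a
# parameter, so the flow can be exercised without an API key.

PROMPT_A = "...your adapted preparation prompt (Prompt A above)..."
PROMPT_B = "...the quality-assessment prompt (Prompt B above)..."

def ask(history: list, user_text: str, reply_fn) -> str:
    """Append the user turn, get a reply via reply_fn, keep both in history."""
    history.append({"role": "user", "content": user_text})
    reply = reply_fn(history)
    history.append({"role": "assistant", "content": reply})
    return reply

def openai_reply(history):
    # Live model call; requires `pip install openai` and OPENAI_API_KEY.
    from openai import OpenAI
    resp = OpenAI().chat.completions.create(model="gpt-4o", messages=history)
    return resp.choices[0].message.content

history = []
# preparation = ask(history, PROMPT_A, openai_reply)
# assessment = ask(history, PROMPT_B, openai_reply)  # sees Prompt A's answer
```

Because both turns live in one `history` list, the assessment in step 4 genuinely evaluates the preparation from step 3 rather than starting from scratch.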

Three years after ChatGPT, the real difference is not who “uses AI” and who doesn’t.

The difference is:

Who treats it as a casual chatbot,

and who combines deep domain and negotiation expertise with disciplined prompts and simple quality checks.
