Dear Negotiation Explorer,

Welcome to issue 35 of our NegoAI series.

Last week, I showed you Sarah's secret weapon — the Kahneman behavioral agent that analyzed her counterpart before they'd even met. Over the past ten issues, you've seen the full system come together. Metaprompting. The prompt framework. Context engineering. Knowledge Bases. Multi-agent orchestration. Behavioral profiling.

I promised to step behind the curtain and show you the tool I use to build all of this. But as I sat down to write, I realized the story isn't really about a tool. It's about a relationship.

Before

For over a year, I built AI agents on a different platform. Good platform. I built Deepak there, and several other agents. The results were solid — the agents worked, the output was useful.

But every session required effort to get aligned. I'd re-explain context. Re-establish preferences. Remind the AI how I think about negotiation, how I want output structured, what my quality standards are. Sometimes the interaction clicked beautifully. Sometimes it didn't. And the sessions that clicked — that understanding didn't carry over. Next session, I'd start again.

It wasn't the platform's fault. That's just how working with AI was. You bring the context, you get the output, the conversation ends. Whatever understanding developed during the session evaporated when you closed the window.

I didn't think of it as a limitation at the time. It's hard to miss something you've never experienced.

The Trigger

Last summer, Anthropic released the desktop version of Claude Code. Not the command-line tool — that's for developers. The desktop app. I installed it out of curiosity.

The first thing I learned — from YouTube tutorials, mostly — was that you could create a file called CLAUDE.md in your project folder. A simple markdown file that Claude reads at the start of every session. Project instructions. Methodology. Quality standards. Whatever the AI needs to know before you say a word.
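To make this concrete, here's a minimal sketch of the kind of thing a CLAUDE.md can hold. The specifics are illustrative, not a copy of my actual file:

```markdown
# CLAUDE.md (illustrative sketch)

## Project
Behavioral profiling agent for pre-negotiation preparation.

## Methodology
Follow the 8-phase agent development framework from issue 9.
Never skip the evaluation phase.

## Quality standards
- Ground every claim in the Knowledge Base, not general knowledge
- Structured output: clear sections, explicit assumptions
- Flag uncertainty instead of guessing
```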

I also learned about CLAUDE_CONTEXT.md — a file that tracks where you left off. Current state, decisions made, next steps. So when you start a new session, Claude picks up exactly where the previous one ended.
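At the end of a working session, a CLAUDE_CONTEXT.md entry is as simple as this (again, illustrative):

```markdown
# CLAUDE_CONTEXT.md (illustrative sketch)

## Current state
Phase 4 of 8: persona calibration drafted, not yet tested.

## Decisions this session
- Merged the generic "tone" section into the persona rules

## Next steps
1. Test the persona against three sample transcripts
2. Log the results in MEMORY.md
```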

These two files changed the interaction immediately. I wasn't re-explaining anymore. Claude already knew the project, the methodology, and where we'd stopped. The first message of every session was productive — not orientation.

But that was just the beginning.

The Deepening

I kept learning. I watch five to ten YouTube tutorials on Claude Code every week — not as a phase I went through, but as an ongoing practice. I read developer articles. I study how software engineers structure their repositories. I'm not a developer. I'm a negotiation professor. But the principles developers use to organize knowledge and workflow turned out to be exactly what I needed.

Then OpenClaw came out — an open-source framework built around the idea of giving Claude a persistent identity, accumulated memory, and structured autonomy. The philosophy clicked immediately: don't just tell the AI about your project. Tell it about yourself. How you think. How you work. What you've learned together.

I borrowed the philosophy and adapted it to my own workflow. I added three more context files:

INTERACTION.md — How we work together. My communication preferences. When to push back, when to execute. How I make decisions. The working style that produces the best collaboration.

MEMORY.md — A cumulative log of what we've learned across sessions. Decisions and why we made them. Patterns that work. Approaches we tried and rejected. Lessons from mistakes. This file grows with every project — it's the institutional memory of our collaboration.

USER.md — Who I am. My background, my credentials, my frameworks. So Claude doesn't just know the project — it knows the person behind it.

Five context files in total. CLAUDE.md, CLAUDE_CONTEXT.md, INTERACTION.md, MEMORY.md, USER.md. Each with a clear purpose. Together, they give Claude everything a genuine collaborator would need to do excellent work.
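On disk, there's nothing exotic about this. A workspace is just a project folder (the layout below is illustrative):

```
negotiation-agent/
├── CLAUDE.md            # project instructions, methodology, standards
├── CLAUDE_CONTEXT.md    # where the last session ended
├── INTERACTION.md       # how we work together
├── MEMORY.md            # cumulative lessons, decisions, corrections
├── USER.md              # who I am: background, frameworks, credentials
└── knowledge-base/      # domain material the project draws on
```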

But here's what I didn't expect: the impact wasn't just that Claude had more information. It's that the nature of the interaction changed.

Understanding Each Other

This is the part that's hard to explain to someone who hasn't experienced it.

When I started with Claude Code, I was learning how to provide context — what to include, how to structure it, what Claude needs to reason well. At the same time, Claude was learning me — through the context files, through corrections I logged in MEMORY.md, through the accumulated patterns of how I work.

Both sides were getting sharper. In parallel.

I got better at briefing → Claude delivered better output. Claude surprised me with sharper reasoning → I trusted it with more complex tasks. I trusted it more → I invested more in the context → the collaboration deepened further.

I call this parallel thinking. Not because we think the same way, but because two learning curves were running simultaneously, reinforcing each other. My understanding of how to work with Claude and Claude's understanding of how to work with me — growing together, session after session, week after week.

It wasn't instant. It was a journey. Some sessions were clunky. Some experiments didn't work. I'd try a new structure for CLAUDE.md, realize it wasn't clear enough, and refine it. I'd log a correction in MEMORY.md and watch it change the next session's output. Gradually, the friction reduced and the quality compounded.
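A MEMORY.md entry doesn't need to be sophisticated to do its job. Something like this (illustrative):

```markdown
## Correction: newsletter openings
Claude kept opening drafts with a summary paragraph. I never use those.
Rule: start with the story, not a summary.

## Pattern that works
Brainstorm angles first, pick one, then outline. Never draft
from a blank page.
```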

Seven months in, the difference is unmistakable. The AI that works with me today is not the same AI that worked with me in September — not because the model changed, but because the relationship did.

Not Just Memory — Thinking

If this were only about context and memory, the story would be interesting but not transformative. Plenty of tools can remember your preferences.

What surprised me — genuinely surprised me — was Claude's capability as a thinker.

I didn't expect this. Even after years of building agents, I didn't think AI could reach this level of reasoning partnership. But when you give Claude the right context — your methodology, your standards, your accumulated lessons, your working style — something shifts. It doesn't just recall and execute. It reasons.

When I brainstorm a newsletter issue, Claude pushes back on weak angles. Not generic pushback — specific objections grounded in the context of the series, the audience, the content strategy. When I build an agent, it questions architectural decisions and proposes alternatives I hadn't considered. When I draft something, it flags blind spots — inconsistencies between what I'm writing and what we decided three sessions ago.

The context files made Claude informed. The model's reasoning capability made it intelligent. Both together made it something I didn't have a word for until recently: a thinking partner.

That's the real shift. Not "AI remembers my preferences." That's table stakes. The shift is: AI that reasons with me, challenges me, and makes the work better because it genuinely engages with the substance — not just the surface.

The Scale of What Changed

Let me give you a sense of how deep this goes.

I don't have one or two projects in Claude Code. I have over twenty-five repositories. Every significant part of my professional work has its own structured workspace:

  • Agent development — Deepak, Kahneman, Kraljic, Sander, and others. Each agent built through the 8-phase methodology I described in issue 9, each in its own repo with full context files.

  • Content — This newsletter you're reading has its own repo. The content strategy, the voice guidelines, the publishing schedule, the lessons from every issue we've written — all in the context files.

  • Courses — The Maven course materials, certification program, course proposals. Each with its own workspace.

  • Methodology — A dedicated repo with the prompt files for each phase of agent development. The 8-phase framework from issue 9, encoded as a reusable workflow.

  • Research, strategy, academic writing, meeting notes — everything.

Each repo has the same five context files, adapted to its domain. When I open any project, Claude already knows the methodology, the current state, the accumulated lessons, and how I work. I don't re-orient. I pick up where I left off and we think together.

This isn't a side experiment. This is how I work. The 8-phase framework I used to build Deepak, the behavioral agent you saw last week, the newsletters you've been reading, the courses I teach — all of it runs through this system.

What Made It Possible

Three things. And none of them are technical skills.

Curiosity. I'm a negotiation professor watching developer tutorials. That sounds odd until you realize that negotiation expertise alone isn't enough anymore. You need both — negotiation expertise and AI literacy. The developers are ahead on the AI literacy axis. Their solutions — structured repositories, context files, version control — turned out to be directly applicable. I didn't need to become a developer. I needed to be curious enough to learn their principles.

Continuous learning. Not a one-time effort. Five to ten tutorials a week. Developer articles. Community discussions. New patterns, new approaches, new ideas — constantly. The landscape evolves fast. What worked three months ago can be improved today. Staying current isn't optional if you want the compounding to continue.

Experimentation. Not everything I tried worked. Some context file structures were too complex. Some patterns I borrowed didn't fit my workflow. I tried approaches, evaluated results, kept what worked, dropped what didn't. The willingness to experiment — and to fail — is what turned generic best practices into a system that's genuinely mine.

If those three qualities sound familiar, they should. They're the same qualities that make a great negotiator. Curiosity about the other side. Continuous learning from every deal. Willingness to try new approaches. You already have these. The question is whether you're applying them to how you work with AI.

The Competence Matrix

From my research, I've developed a simple framework to diagnose where you stand. It has two dimensions: negotiation expertise and AI literacy.

| | Low AI Literacy | High AI Literacy |
| --- | --- | --- |
| High Negotiation Expertise | Traditional Expert | Augmented Expert |
| Low Negotiation Expertise | Novice | Superficial Technologist |

Most of you reading this are in the top-left quadrant: Traditional Experts. Strong negotiators. Deep domain knowledge. Years of experience. But not yet leveraging AI to amplify that expertise.

The bottom-right is where the hype lives: Superficial Technologists. People who know the tools but lack the domain depth. They can make AI produce impressive-looking output — but they can't tell whether it's actually good. In negotiation, that's dangerous.

The goal is the top-right: Augmented Expert. Someone who brings deep negotiation expertise and the AI literacy to amplify it. That's where the 48% higher individual gains from my research come from — not from AI replacing expertise, but from AI augmenting it.

Everything I've described in this article — the tutorials, the structured workspaces, the experimentation, the parallel thinking — that's the journey from Traditional Expert to Augmented Expert. It's not about abandoning what you know. It's about adding a second dimension to it.

The three pillars — curiosity, continuous learning, experimentation — are how you move along the AI literacy axis without losing the negotiation expertise axis. You're not starting over. You're building on a foundation that took years to develop.

What This Means for You

I'm not telling you to use Claude Code. This isn't a product recommendation.

I'm telling you that the quality of your AI collaboration is directly proportional to how well you structure your workspace — regardless of which tool you use.

We've been building toward this principle all series. In issue 5, we covered context engineering — how structured context transforms AI output. In issue 7, you built a Knowledge Base and saw how organized expertise produces better results than scattered information. This issue is the same principle, applied to the entire working relationship.

Ask yourself:

  • When you start an AI session, does the AI know what you're working on?

  • Does it know how you work? Your standards? Your preferences?

  • Do your sessions build on each other — or start from scratch every time?

  • Is the AI getting better at working with you over time?

If the answer to most of these is no, the fix isn't a better prompt. It's a better workspace.

This Week's Exercise (20 minutes)

Pick one project you return to regularly — a negotiation you're preparing for, a recurring analysis, a client relationship.

Step 1 (5 minutes): Open a Claude Project or a ChatGPT Project — whichever you use. In the project instructions, write three things: what the project is about, how you like to work, and what you've learned so far. Be specific. "I prefer structured output with clear sections" is more useful than "be helpful." (You'll find a sketch of what this can look like after Step 3.)

Step 2 (5 minutes): Start a session inside that project. Work on something real. Notice how the interaction changes when the AI already knows the context — when you don't have to re-explain before you can think together.

Step 3 (10 minutes): After the session, add a section to your project instructions: what you learned from working together. A decision you made. A pattern that worked. A correction. Update it after every session.
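To give you a starting point, here's roughly what those project instructions can look like after two or three sessions. The content is illustrative; yours will reflect your own project:

```markdown
## What this project is
Preparing the Q3 supplier renewal. Target: 8% cost reduction
without losing the priority-delivery clause.

## How I work
Structured output with clear sections. Push back on weak arguments
before I take them into the room.

## What we've learned so far
- Their BATNA is weaker than it looks; verify before anchoring low
- Opening with the relationship history worked; leading with price did not
```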

That's where the compounding starts. Not from a single setup — from the accumulation. Each session deposits a little more understanding. Over weeks, the difference becomes unmistakable.

This connects directly to what we covered in issue 5 and issue 7. The principle is the same: structured context transforms AI output. The difference is that now you're structuring context not just for a task — but for the relationship.

The gap between using AI and working with AI isn't closed by a feature.

It's closed by an investment — in structure, in context, in the relationship itself. The returns compound.

But only if you start.

Questions? Reply directly — I read every response.
