We Finally Made Code Easy Enough That the Human Part of Engineering Became Impossible to Ignore

AI increased output. It also made coordination, trust, and emotional intelligence the limiting factors in engineering teams

There is a persistent fantasy in software engineering that the job, at its core, is about writing code.

A person sits down, applies enough intelligence and focus, and produces something that works. Everything else, meetings, alignment, the slow and slightly painful act of explaining yourself to other people, gets framed as overhead. Necessary, maybe, but secondary. Something to minimize.

This fantasy survives mostly because, for a long time, it was cheap enough to maintain.

You could get away with it. Systems were smaller. Teams were smaller. The cost of misunderstanding each other existed, but it did not dominate. You could still believe that the real work was the code and that everything else was a kind of administrative tax on top of it.

Then two things happened, and neither of them is especially controversial on its own.

Systems became more complex.

And AI made code dramatically cheaper to produce.

Individually, these are easy to process. Together, they do something slightly destabilizing.

Because when the cost of producing code drops, the relative cost of everything around it increases. Not in theory, but in the very practical sense that you start spending more time dealing with the consequences of work than with the work itself.

And at the same time, AI introduces a new, mostly invisible layer into how that work is produced.

You no longer just think and then write. You think with something. You iterate. You ask, refine, discard, converge. You arrive at an answer that feels, to you, coherent and even obvious, because you have lived inside the process that produced it.

But that process is largely private.

In The Orchestrators Era, I argued that engineers are becoming orchestrators of AI systems. Which is true, but incomplete in a way that only becomes clear when you zoom out one level.

Each engineer is now orchestrating their own AI, in their own context, through their own sequence of questions and corrections, producing outputs that carry the shape of that interaction without exposing it.

So when those outputs meet, inside a team, what you have is not just different solutions to the same problem.

You have different, partially invisible histories of reasoning colliding with each other.

And this is where the old fantasy breaks down.

Because the hard part is no longer producing the solution. It is making the path that led to that solution legible enough that other people can work with it, question it, or even trust it.

Which is a much less comfortable kind of work, and also the one that is quietly becoming unavoidable.

Engineering was always a coordination problem

There is a tendency, especially among people who have been rewarded for individual output, to treat coordination as something that happens after the real work is done. You solve the problem, you write the code, and then, unfortunately, you have to explain it to other people.

This framing is convenient. It is also wrong in a way that only becomes obvious at scale.

Because the moment you move beyond a single person working on a contained system, the work stops being additive and starts being combinatorial. Every new component, every new person, every new dependency does not just increase the amount of work. It increases the number of interactions between pieces of work.

And interactions are where things break.

The Mythical Man-Month is still cited for the almost cliché observation that adding people to a project can slow it down. What tends to get lost is why. It is not just onboarding cost or communication overhead in the abstract. It is that every additional person introduces new edges in the system. More paths where assumptions can diverge without anyone noticing immediately.
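The arithmetic behind Brooks's observation is simple and worth making concrete: among n people there are n(n-1)/2 possible pairwise communication paths, so each new person adds more edges than the one before. A minimal sketch of that growth:

```python
def communication_paths(n: int) -> int:
    """Number of pairwise communication paths among n people: n choose 2."""
    return n * (n - 1) // 2

# Each additional person adds n-1 new edges, not one more unit of work.
for n in [2, 5, 10, 20]:
    print(n, communication_paths(n))
# 2 -> 1, 5 -> 10, 10 -> 45, 20 -> 190
```

Doubling the team from 10 to 20 people roughly quadruples the number of places where assumptions can silently diverge.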

The uncomfortable implication is that a large portion of engineering effort has always been spent managing these edges. Clarifying intent. Aligning expectations. Correcting misunderstandings that only become visible after something fails in a non-obvious way.

And yet, culturally, this work has been treated as secondary. Something adjacent to engineering rather than central to it.

Part of this comes from what is visible.

Code is visible. It compiles, it runs, it can be reviewed line by line.

Coordination is mostly invisible when it works. There is no artifact that says, “this misunderstanding did not happen.” There is no diff for a conversation that prevented a problem before it existed.

So the industry built a quiet bias.

It rewards what it can see.

Which means it overvalues the production of code and undervalues the work required to make that code coherent inside a larger system of people.

This was sustainable, in the same way the earlier fantasy was sustainable. The cost of coordination failures existed, but it was often absorbed, delayed, or attributed to something else. A bug. A deadline. A vague sense that something went wrong without a clear origin.

What changes with AI is not that coordination suddenly becomes important.

It is that the margin for ignoring it gets thinner.

When producing code becomes faster and easier, the proportion of time spent dealing with misunderstandings, misalignments, and implicit assumptions increases. Not because people got worse, but because everything else got cheaper.

So the thing that was always there, slightly hidden and easy to dismiss, starts to dominate.

And once it dominates, it stops being optional.

The stereotype survives because it was convenient

There is a familiar story about engineers that gets repeated often enough that it starts to feel like a law of nature.

Engineers are bad at people. Strong technically, weak socially. Valuable, but in a narrow way that requires someone else to translate their work into something usable.

It is a clean story. It is also lazy.

Not because there are no engineers who struggle with communication. Of course there are. But because the story quietly assumes that this is an inherent trait rather than the result of how the industry has been structured.

For a long time, engineers were selected and rewarded under conditions where interpersonal skill was optional.

You could:

  • Deliver high output individually
  • Avoid difficult conversations
  • Communicate just enough to get by

And still be considered highly effective.

In some environments, this was not just tolerated. It was protected. The person who produced the most code, or solved the hardest technical problems, was allowed a wider margin of social friction because their output was easy to point at and defend.

So what you end up with is not a population that lacks social or emotional capacity, but a population that has been trained, quite consistently, to treat those capacities as secondary.

Which is a different claim, and a more uncomfortable one.

Because it means the behavior is adaptive, not inherent.

And adaptive behaviors change when the environment changes.

The stereotype persists partly because it simplifies something that is otherwise harder to talk about. It turns a structural issue into a personality trait. It allows teams and organizations to say, more or less, “this is just how engineers are,” instead of asking whether the system they built is selecting for and reinforcing that behavior.

There is also a quieter effect.

Once the stereotype is in place, it feeds back into itself.

Engineers who are already less inclined toward communication can lean into the role. Engineers who are capable of strong interpersonal work may downplay it because it is not what gets recognized. Over time, the distribution shifts just enough that the stereotype feels empirically true, even if it started as a distortion.

This is where something like Being and Nothingness becomes unexpectedly relevant, not as abstract philosophy but as a description of a very practical dynamic.

Sartre’s point, in a much broader context, is that people are not fixed things. They become what they repeatedly do, especially under the gaze and expectations of others.

Put differently, if you build an environment where communication is optional and individual output is everything, you should not be surprised when people become exactly that.

The problem is that the environment has changed, but the story has not.

AI does not remove the need for coordination. It increases it. It does not make interpersonal skill irrelevant. It makes the absence of it more visible and more costly.

So the old stereotype starts to break, not because engineers suddenly became different people, but because the conditions that allowed that version of the role to function are eroding.

And once those conditions are gone, what looked like a personality trait starts to look more like a gap.

AI as a private reasoning environment

If you look closely at how engineers are actually using AI, the change is not just speed. It is location.

Reasoning moved.

Not completely. Not in a clean, replace-the-old-way sense. But enough that a meaningful part of the work now happens inside a loop that is fast, interactive, and mostly invisible to anyone else.

Before AI, thinking left traces almost by default.

You searched. You read documentation. You skimmed threads. You tried something, it failed, you tried again. Even if no one followed every step, the path existed in places other people could inspect or reconstruct with some effort.

Now the loop is tighter.

  • You ask.
  • You get an answer.
  • You refine the question.
  • You discard two or three directions in minutes.
  • You converge.

By the time something reaches the codebase or a pull request, it often feels obvious. Clean. As if it emerged fully formed.

But that feeling comes from having lived through the process that produced it.

No one else did.

And the process itself, the sequence of prompts, assumptions, corrections, is usually gone or never shared in the first place.

This creates a subtle but important shift.

Two engineers can start from the same problem and arrive at different solutions that both make sense locally. Not because one is careless or less capable, but because the path they took through the problem space was different.

  • Different prompts.
  • Different framing.
  • Different intermediate steps that shaped the final answer.

And those paths are not visible unless someone makes a deliberate effort to expose them.

So what shows up in the team space is not the reasoning. It is the result.

A solution without its history.

Which means that when someone questions it, what they are really questioning is something they cannot see. And when the original author defends it, they are often defending a chain of reasoning they experienced but did not externalize.

This is where a lot of modern friction lives.

Not in the code itself, but in the gap between:

  • A conclusion that feels justified to one person
  • And a conclusion that feels arbitrary to someone else

Because the bridge between those two states, the reasoning that would make it legible, was never built.

And it is mostly not built, because the tooling does not require it.

It is easier, faster, and often entirely sufficient in the short term to just present the output.

So the system drifts toward a new default.

More work produced. Less of the thinking behind it shared.

Which is efficient right up until the moment coordination matters.

And that moment is becoming the majority of the work.

Coordination is now alignment across private reasoning

If reasoning is increasingly happening in private loops, then coordination cannot just be about sharing artifacts anymore.

Artifacts are the end of the process.

The problem lives in everything that led to them.

So what teams are actually trying to do, whether they name it or not, is align across different, partially hidden interpretations of the same problem.

This is a different kind of coordination.

Before, you could often resolve disagreements by pointing at something concrete.

  • The code does X
  • The requirement says Y
  • The system behaves in a specific, observable way

Now those anchors are still there, but they are no longer sufficient.

Two people can look at the same artifact and see different things, not out of carelessness, but because the path that led them there shaped what they consider obvious, risky, or even relevant.

So disagreements start to feel strangely persistent.

  • You explain your reasoning. It still does not land.
  • They explain theirs. It still does not convince you.

At some point, the conversation stalls, not because one side is right and the other is wrong, but because the underlying context is misaligned.

This is where something like Nonviolent Communication becomes less about communication style and more about structure.

Rosenberg’s separation between observation, interpretation, and need maps almost directly to what is breaking here.

  • What are we actually seeing
  • What are we inferring from it
  • What are we optimizing for

In practice, most engineering discussions collapse these into a single layer.

  • A statement that sounds like a fact is often an interpretation.
  • A preference is presented as a requirement.
  • A tradeoff is hidden inside what looks like a technical decision.

When the reasoning path is already invisible, collapsing these layers makes it almost impossible to reconstruct what is going on.

So coordination becomes slower, not because people are less capable, but because they are trying to align without access to the structure of each other’s thinking.

There is also a cost that is harder to measure.

Every time someone has to push harder than necessary to get their point across, or back down without feeling understood, you accumulate a small amount of friction.

The Managed Heart describes this in a different context as emotional labor. The effort required to manage what you show, what you suppress, and how you present yourself so that interaction remains functional.

That effort is now part of engineering work in a more direct way.

Not as an abstract “be nice to your teammates” guideline, but as a practical requirement:

  • Deciding how much of your reasoning to expose
  • Presenting uncertainty without losing credibility
  • Questioning someone else’s work without triggering defensiveness

None of this replaces technical skill.

But it starts to determine whether technical skill can actually be integrated into a team without excessive friction.

And once that friction crosses a certain threshold, the system slows down in ways that no amount of individual productivity can compensate for.

Emotional intelligence as tolerance for exposure

At this point, calling this “emotional intelligence” starts to feel slightly misleading, mostly because the term suggests control. As if the goal were to manage emotions the way you manage a system. Identify inputs, regulate outputs, keep everything stable.

That is not what is happening here.

What is required is closer to a tolerance for exposure.

Once you accept that a meaningful part of your reasoning is:

  • Shaped in a private loop
  • Partially invisible
  • And not automatically shared

Then working effectively with others means choosing to expose parts of that process that were never designed to be seen.

Not just conclusions. The uncertainty before them. The alternatives you discarded. The assumptions you did not question at the time because they felt obvious.

And that exposure is uncomfortable in ways that are easy to underestimate.

It risks:

  • Looking less certain than you feel
  • Inviting critique earlier than you would prefer
  • Slowing down your own momentum to make space for someone else to catch up

This is where something like Playing and Reality becomes useful, but only if you translate it out of clinical language.

Winnicott’s distinction between a “true self” and a “false self” is not about authenticity in a vague sense. It is about the difference between:

  • Acting in a way that maintains connection
  • And acting in a way that maintains control

In a team setting, especially under pressure, it is very easy to default to a kind of functional false self.

  • You present clean conclusions.
  • You hide the messy parts of your reasoning.
  • You defend your position more than you examine it.

From the outside, this looks like competence.

The code is there. The argument is coherent. Nothing is obviously broken.

But underneath, coordination degrades.

What others receive is a finished surface without access to how it was constructed.

Emotional intelligence, in this context, is not about being agreeable or easy to work with.

It is about resisting that collapse into surface-level competence.

It shows up in small, specific ways:

  • You explain how you arrived at something, not just what you arrived at
  • You notice when someone is confused and adjust before the conversation stalls
  • You can sit in a disagreement without immediately trying to resolve it through force or withdrawal
  • You admit when your reasoning is incomplete without treating that as failure

None of this is particularly dramatic.

But each of these actions reduces the gap between private reasoning and shared understanding.

And that gap is now where most of the work is.

So the skill is not “managing emotions” in the abstract.

It is staying present in a situation where your thinking is exposed, partially challenged, and not fully under your control, and continuing to engage without retreating into defensiveness or silence.

Which is less about polish and more about endurance.

The new competitive edge

If you follow the thread all the way through, the conclusion is not especially dramatic, but it is difficult to avoid.

The advantage is shifting.

Not away from technical skill. That would be too easy, and also wrong.

But away from isolated technical output as the primary signal of effectiveness.

When code becomes cheaper to produce, and reasoning becomes more private, the limiting factor is no longer how much you can generate on your own.

It is how well what you generate can be understood, trusted, and integrated by others.

Which is a different kind of skill.

And it shows up in ways that are easy to miss if you are still looking for the old signals.

The engineer who stands out is not necessarily the one who:

  • Produces the most code
  • Finds the fastest local solution
  • Navigates AI tools with the most fluency in isolation

It is the one who reduces the cost of coordination for everyone else.

Which means:

  • Making their reasoning legible without being forced to
  • Exposing assumptions before they become problems
  • Reconciling different approaches without turning it into a zero-sum argument
  • Creating shared context where none exists by default

This is a form of leverage, but not the kind that shows up in metrics tied directly to output.

It shows up in what does not happen.

Fewer stalled discussions. Fewer late-stage surprises. Fewer situations where something has to be reworked because it was never fully understood in the first place.

And because these are absences rather than visible artifacts, they are easy to undervalue, at least until the system starts to depend on them.

Which it increasingly does.

There is also a quieter shift in what it means to be “senior.”

It used to correlate strongly with depth of knowledge and the ability to solve complex problems independently.

That still matters.

But seniority now also includes the ability to operate across these fragmented, AI-mediated reasoning spaces and make them cohere.

To take multiple, partially formed lines of thought and turn them into something that a team can actually move forward with.

That is not a soft skill layered on top of engineering.

It is a core part of the work.


The industry spent years acting as if interpersonal skill was an optional layer, something you could add once the real engineering was done.

That position becomes harder to maintain when the real engineering work starts to depend on how well people can align around things that were never fully visible to begin with.

AI did not remove the human part of the job.

It made more of it unavoidable.

Because now, more than before, the work is not just building the right thing.

It is making the path that led there visible enough that other people can actually work with it.