You're Not Being Replaced by AI. You're Being Freed to Think Again

A reflection on how AI is transforming software engineering by handling routine tasks, allowing developers to focus on higher-level system thinking and architectural decisions.

Let's start with the house.

Picture a human architect, black coffee in hand, sketching a new structure - balance, airflow, angles, sun. All the human stuff. And then imagine the same person, minutes later, crawling under joists with a hammer, trying to figure out how many nails per stud. That's been software engineering.

We call ourselves "engineers" and then spend half the day arguing with linters.

But now, with the quiet, creeping arrival of competent AI agents (GPTs, Copilots, Claude variants inside Slack), we're entering a shift. A real one. Not the disruption-buzzword one. The work feels different. And if you're paying attention, the most surprising thing isn't that AI might replace us.

It's that we've been doing too many non-engineering tasks for too long, and AI is finally letting us engineer.

The Fear (which turned out to be something else)

There was a moment, say mid-2020s, when developers all looked around and thought: "This thing is writing better regex than me. I'm doomed." But that fear, mostly stoked by VC slides and LinkedIn Hot Takes, missed a key nuance: most of us weren't hired for our semicolon placement. Take the most cryptic programming language out there and you can still learn its syntax in no time.

We were hired to understand systems. Model domains. Invent abstractions. And somewhere in the mess of Jira tickets, we forgot.

AI isn't actually replacing us. It's replacing the boring parts of us. The conversion function. The CRUD scaffolding. The pixel nudge. The parts that slow thinking down. And in doing so, it's forcing a confrontation: if I'm not spending all day doing that, then what am I for?

The answer: thinking. Again. Like we used to. Before meetings metastasized.

What AI Is Good At (and What It Isn't)

If software is like architecture, then AI is the general contractor.

You sketch, it builds. It nails. It does not pause. It does not second-guess. Ask it for a Python API that maps user input to GraphQL schema and validates against three edge cases—done. Clean. Testable. It's like magic, if magic occasionally hallucinated and ignored your last sentence.
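To make that concrete, here's a minimal sketch of the kind of code such a prompt yields. Everything in it is invented for illustration - the required fields, the `CreateUserInput`-style shape, the three edge cases - standing in for whatever your real schema demands.

```python
# Illustrative only: a made-up input shape, not a real GraphQL schema.
REQUIRED_FIELDS = {"email": str, "age": int}

def to_graphql_input(raw: dict) -> dict:
    """Validate raw user input and shape it for a GraphQL-style input type."""
    errors = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in raw:                        # edge case 1: missing field
            errors.append(f"missing field: {field}")
        elif not isinstance(raw[field], expected):  # edge case 2: wrong type
            errors.append(f"{field} must be {expected.__name__}")
    if isinstance(raw.get("email"), str) and "@" not in raw["email"]:
        errors.append("email looks invalid")        # edge case 3: malformed email
    if errors:
        raise ValueError("; ".join(errors))
    return {"input": {"email": raw["email"], "age": raw["age"]}}
```

Clean, testable, exactly what you asked for - and not one line of it required you to decide anything that matters.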

But ask it to decide why that API exists, or what tradeoff to make between latency and complexity, and you'll get word salad. That's not a bug. It's a hint. The AI is showing you where your mind is still needed.

This is the hidden bargain: AI will stop you from typing obvious things. But it will also surface what's not obvious. That's where you start to work again.

The danger isn't the tool itself - it's mistaking the tool's output for understanding. When developers treat AI-generated code as gospel without grasping the underlying systems, we get what I've called Dunning-Kruger Driven Development: confidence without comprehension. AI makes this trap even more seductive because the code it produces often works. But "it works" isn't engineering, it's just the opening line of a much longer story.

And here's where the housing metaphor earns its keep: you, the software architect, still have to supervise what the AI builds and, crucially, understand the "why" behind each decision, each algorithmic turn, each syntactic choice. You are a responsible steward, locked in an endless dance of oversight with a partner that never tires but also never truly understands.

The Return of Software Thought

If you've built anything substantial, you know the hardest parts aren't mechanical. They're conceptual. How should user states flow? Where does this logic live? What should this system feel like five years from now? These are aesthetic, architectural, time-bound choices. You can't copy-paste them.

These are questions about time, behavior, and structure. Not syntax.

And when the low-level work fades away, these are the only questions left. Which is the whole point: engineering becomes engineering again.

AI won't save us from bad ideas. It'll just build them faster. If you're not doing the thinking, the LLM will, and it's not better than you at it.

So Now What?

Junior devs still need to write a lot of code to learn (but even they can benefit from using AI; after all, it IS a tool in their toolbox). But seniors? Leads? Principals? We should be spending our time sketching. Modeling. Talking in systems. Let the AI nail the beams. You should decide where the sun hits the living room.

If you've felt a shift in your day lately (more code reviews, fewer for loops), that's not accidental. That's AI quietly taking over the rote and asking, “What next?”

So the best engineers aren't fighting it. They're leaning in. They're reading more. They're diagramming again. They're talking to PMs about user flows instead of sprints.

Because this isn't the end of engineering.

It's the beginning of doing it better.