As a software consultant, I’ve noticed a pattern play out at nearly every client over the last year. A team adopts Cursor or Claude Code or Copilot and their productivity, especially on greenfield tasks, jumps noticeably. And then, someone asks: “If the AI can do this, what are the developers for?”
It’s a valid question, and one I’ve been thinking about myself as AI has improved at many software development tasks over the last year or so. Using these tools daily on client projects, internal work, and side projects has made the answer clear to me. No matter how good AI gets at some of our daily tasks, developers will still be needed for their systems thinking, their setting of guardrails for the AI, and most importantly, their human judgment.
Software Development Was Always Repetitive
For decades, a huge part of professional software development has been following the patterns that already exist in a codebase. You might stand up a new REST API, and that first endpoint is genuinely hard. You’re making real decisions about design patterns, systems architecture, URL structure, authentication/authorization, database access, caching, and error handling. But for endpoints two through twenty, you’re just following the recipe you already wrote. We run into this a lot with client teams. They might not have the experience to architect a well-designed system from scratch, but once we get them going on a good pattern, they can easily follow it for new features.
AI is also very good at following recipes. Point it at a codebase with established conventions and it’ll crank out the next endpoint, the next service method, or the next React component in the same shape as the ones before it. In that way it’s like a junior developer who reads the existing code before writing new code and carefully follows the established patterns.
So yes, a meaningful chunk of what we used to spend our days typing is now automatable. My fingers don’t hurt at the end of the day anymore, and I don’t think they’re going to again, especially as more developers leverage voice chat capabilities with their AI tools.
AI Doesn’t Know What Good Looks Like
Here’s what I keep running into. These models are trained on an internet’s worth of code, and a lot of that code, most of it really, is mediocre. Developers who have spent the last decade hunting for answers on Stack Overflow already know this. Whether it’s tutorial snippets, Reddit threads, Stack Overflow answers written in a hurry, or open source projects with no review process, a lot of the code on the internet (and in the world) is poor-to-mediocre. These models are fundamentally averaging machines that guess at the most likely next word or token. If you give them a vague prompt, you will most likely get back something that looks like the average of what’s out there on the internet. It might eventually compile, it might even work, but it won’t reflect the specific decisions and tradeoffs your project needs.
I’ve never seen an AI look at a codebase and suggest a better architecture unless it’s specifically asked to by the developer running it. It probably won’t notice that your auth middleware has a subtle timing vulnerability. It won’t propose event sourcing because it picked up on a pattern of concurrency bugs in your shared state. It doesn’t know your deployment constraints, your team’s skill level, your future scalability needs, or the fact that your biggest customer hammers one particular endpoint at the same time every Monday morning.
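The timing-vulnerability point is worth making concrete. Here’s a minimal sketch (the token value and function names are hypothetical, not from any real framework): comparing secrets with `==` short-circuits at the first mismatched character, so response times leak information an attacker can measure, while Python’s `hmac.compare_digest` compares in constant time.

```python
import hmac

EXPECTED_TOKEN = "s3cr3t-api-token"  # hypothetical secret, for illustration only

def check_token_naive(supplied: str) -> bool:
    # Vulnerable: == stops at the first differing character, so the
    # response time hints at how much of the token the caller guessed.
    return supplied == EXPECTED_TOKEN

def check_token_safe(supplied: str) -> bool:
    # Constant-time comparison: runtime does not depend on where the
    # strings differ, closing the timing side channel.
    return hmac.compare_digest(supplied, EXPECTED_TOKEN)

print(check_token_safe("s3cr3t-api-token"))  # True
print(check_token_safe("wrong-token"))       # False
```

The subtle part is that both functions return identical results in every test you’d normally write, which is exactly why a pattern-matching model is unlikely to flag the naive version.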
Those are judgment calls that require human experience. In my experience, the developers getting the most out of AI right now are the ones whose judgment is already sharp. They can tell when the model missed and know exactly how to correct it.
The Specification Problem Remains
We have a joke in consulting: clients say they have “detailed specs” and then hand you the title of their project. I’ve been doing this long enough to know that the gap between what someone says they want and what they actually need is where most project risk can be found.
AI has the exact same problem. A quick, vague prompt from someone without experience can generate a lot of impressive-looking code fast. Then you spend three times as long iterating it into something that actually meets the requirements, requirements you should have pinned down before you started generating anything.
The teams I’ve seen get real traction with AI-assisted development aren’t the ones who figured out some magic prompt template. They’re the ones who already had their software fundamentals together: clear requirements, fast CI pipelines, strong automated test coverage, pull request reviews that actually catch things. Those aren’t AI skills. Those are engineering discipline. AI just raised the stakes on them.
If your feedback loops are slow (if you don’t know your code is broken until it’s in staging) then AI is only going to make you produce broken code faster, and that’s not a win.
What About the Next Generation?
This one’s harder to talk about, and I think our industry is being too quiet about it. Hiring managers at companies I work with are pausing junior roles. Not eliminating them altogether, just pausing hiring so they can see how far their current teams can scale with AI. Honestly, it’s rough for new graduates right now.
I lived through the outsourcing scare of the mid-2000s. Teams I worked on lost people to offshore replacements. For a while it felt like the bottom was falling out. But it didn’t. The work evolved, the value proposition shifted, and eventually it stabilized. Those who moved up the value chain survived. I think something similar is happening with AI, but I’m not going to pretend it’s happening on the same timeline. AI is a much faster-moving disruption.
What I’ll say is this: developers entering the field now need to lead with judgment earlier than my generation did. Writing decent algorithms or code that compiles is no longer a differentiator when AI can do that. Understanding why certain decisions matter and being able to look at generated code and say “this won’t scale” or “this misses the actual requirement,” that’s what will stand out in the marketplace. It’s a higher bar, and the industry owes it to junior devs to be upfront about that instead of pretending nothing has changed. Recently, Mark Russinovich and Scott Hanselman from Microsoft published an excellent paper about what organizations can do to avoid some of the pitfalls of this era for junior engineering talent.
Developers Are For Judgment
After about a year of working with these tools, here’s where I’ve landed. Developers aren’t paid for typing code. The best ones never really were; it just felt that way because typing code took up so much of the day. Developers are paid for knowing which endpoint needs the cache and which one doesn’t. They are paid for catching that the generated migration will lock a production table for twenty minutes and stall other critical work. They are paid for understanding that what the product owner thinks they need isn’t what they actually need, and for helping them see it.
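The migration example is a real trap. As a hedged sketch (table and column names are made up, and the lock behavior described applies to PostgreSQL versions before 11), a single `ADD COLUMN ... NOT NULL DEFAULT` can rewrite the whole table under an exclusive lock, while splitting it into small steps with a batched backfill keeps each lock brief:

```python
# Hedged sketch: hypothetical table/column names. On PostgreSQL before v11,
# adding a NOT NULL column with a DEFAULT rewrote the entire table while
# holding an ACCESS EXCLUSIVE lock; on a large table that blocks reads and
# writes for as long as the rewrite takes.
NAIVE_MIGRATION = "ALTER TABLE orders ADD COLUMN status text NOT NULL DEFAULT 'pending';"

# A safer pattern: each step holds its lock only briefly, and the backfill
# runs in batches so no single statement touches every row at once.
SAFE_MIGRATION_STEPS = [
    "ALTER TABLE orders ADD COLUMN status text;",  # nullable add: no table rewrite
    "UPDATE orders SET status = 'pending' WHERE id BETWEEN %s AND %s;",  # batched backfill
    "ALTER TABLE orders ALTER COLUMN status SET DEFAULT 'pending';",
    # Run last, after the backfill completes; this still scans the table
    # to validate, so schedule it for a quiet window.
    "ALTER TABLE orders ALTER COLUMN status SET NOT NULL;",
]

for step in SAFE_MIGRATION_STEPS:
    print(step)
```

Both versions end up at the same schema. Knowing which one is safe to run against a busy production table is exactly the judgment call the generated code won’t make for you.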
AI now handles the mechanical translation of intent into code. Developers are the ones who make sure that intent is right in the first place and the ones who know how to fix it when it isn’t. And that’s not a new skill. It’s the skill that was always underneath the typing. We just get to spend more time on it now.
I recently went deep on the tactical side of all this with my guest Cory House on an extended edition of the Blue Blazes Podcast. We discuss choosing between AI harnesses, model selection, multi-agent workflows, and how to actually structure your prompts and feedback loops. If you’re adopting AI-assisted development on your team, give it a watch or listen.


