Since around December 2025, I've handed off basically all code writing to Claude Code. Not as an experiment. Not as a productivity hack I'm trying out. It's just how I work now. I describe what I want, I review what I get, I iterate. The code gets written. I don't write it.
That shift happened faster than I expected, and it changed how I think about what this job actually is. Not what it might be someday. What it is right now, today, for anyone paying attention.
Nobody can credibly predict what software development looks like long term. AI is making such massive leaps that it's hard to think beyond a year or two. But I don't need to predict the future. The shift already happened. The developers who thrive aren't the ones who write code. They're the ones who think the most clearly about what needs to be built and why.
Requirements Are the Job Now
This is the highest-leverage thing most developers underinvest in. When AI generates a working implementation from a well-written spec, the spec is the work.
Defining business goals precisely. Translating vague stakeholder intent into actual system behavior. Specifying edge cases, failure modes, performance expectations. None of that is new work. But it used to be a step you rushed through to get to the "real" work of writing code. Now it is the real work.
The developer who can take an ambiguous ask and produce a clear, testable description of what "done" looks like? That person is going to be wildly more valuable than the one who can bang out a feature in a weekend but can't explain what it's supposed to do.
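One way to make "done" concrete is to write the spec as executable checks before any implementation exists. The sketch below is hypothetical: the feature (`apply_discount`), its rules, and the discount codes are invented for illustration, and the tiny implementation stands in for whatever the AI would generate against these checks.

```python
# A hypothetical "definition of done" for a discount feature, written as
# executable checks. In practice, the checks come first and the AI produces
# an implementation that has to satisfy them.

def apply_discount(subtotal: float, code: str) -> float:
    """Stand-in implementation; normally this is what the AI generates."""
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    if subtotal < 0:
        raise ValueError("subtotal must be non-negative")
    rate = rates.get(code, 0.0)        # unknown codes apply no discount
    return round(subtotal * (1 - rate), 2)

# The spec: each line pins down one behavior, including the edge cases
# a vague ticket would leave ambiguous.
assert apply_discount(100.0, "SAVE10") == 90.0      # happy path
assert apply_discount(100.0, "BOGUS") == 100.0      # unknown code is a no-op
assert apply_discount(0.0, "SAVE25") == 0.0         # zero subtotal stays zero
try:
    apply_discount(-5.0, "SAVE10")
    raise AssertionError("negative subtotal should be rejected")
except ValueError:
    pass                                            # rejection is the spec
```

Every assertion is a decision somebody had to make. Writing them down is the part of the job that can't be delegated.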
Think About Risk Before You Build
The temptation with AI-assisted development is to move fast and skip the thinking. You can generate a working implementation in minutes. But "working" and "correct under all conditions" are very different things.
What happens when this service goes down? What data are we exposing? What regulations apply? What's the blast radius of a bad deploy? These questions matter more when the cost of building the wrong thing drops to near zero. You'll build it faster, sure. You'll also build the wrong thing faster if you're not careful.
Anyone can ship a feature. Not everyone can tell you what breaks when it fails, what it costs to maintain, or what it exposes to an attacker. That judgment is the job now.
Architecture Still Needs a Human
AI can generate code in any framework you point it at. It doesn't have opinions about whether that framework is the right choice for your constraints. That's still on you.
Picking the right stack based on cost, scalability, maintainability, team capability, vendor risk. Balancing velocity against long-term health. Optimizing for change rather than just initial delivery. These are judgment calls that require context AI doesn't have.
If anything, architecture becomes more important. When implementation is fast and cheap, mistakes in the foundation compound sooner, because load-bearing code gets piled on top of them faster than it used to.
Integration Is Where It Gets Hard
Systems don't exist in isolation. The complexity of modern software isn't usually in the components. It's in how they connect. APIs, data schemas, service boundaries, event contracts, authentication flows.
This is work that can't be fully delegated to a code generator, because the constraints are as much political and organizational as they are technical. A mismatch in assumptions between two teams about the same API contract still produces one of the hardest classes of bugs to find. The developer who defines clean boundaries and gets everyone to agree on them is preventing those bugs before they exist.
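One concrete way to get that agreement is a shared contract both teams depend on, so a mismatched assumption fails loudly at the boundary instead of quietly in production. This is a minimal sketch: the event name, fields, and teams are hypothetical, and a real system would likely use a schema registry or generated types instead of a hand-written dataclass.

```python
# A minimal sketch of a shared contract: one definition both the producing
# and consuming teams import, so disagreements surface at construction time.
# The event and its fields are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class OrderPlaced:
    """Agreed event contract between a checkout team and a fulfillment team."""
    order_id: str
    amount_cents: int      # integer cents, not a float of dollars
    currency: str          # 3-letter ISO 4217 code, e.g. "USD"

    def __post_init__(self):
        if self.amount_cents < 0:
            raise ValueError("amount_cents must be non-negative")
        if len(self.currency) != 3:
            raise ValueError("currency must be a 3-letter ISO code")

# Producer side: constructing the event validates it against the contract.
event = OrderPlaced(order_id="ord-123", amount_cents=4999, currency="USD")

# A payload built on a different assumption (say, a negative adjustment the
# consumer never agreed to) fails here, at the contract, not as a billing
# bug weeks later.
try:
    OrderPlaced(order_id="ord-124", amount_cents=-1, currency="USD")
except ValueError:
    pass  # the contract caught the disagreement
```

The code is trivial on purpose. The value is in the negotiation it forces: someone had to decide cents versus dollars, and now that decision is enforced instead of assumed.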
Orchestrating AI Is a Skill
This is the genuinely new part. I don't use Claude Code as a typing assistant. I use it the way I'd work with a fast, tireless junior developer who needs clear direction. I describe intent. It generates implementation. I review, test, and iterate. The code is an artifact derived from a spec. I ship it. I didn't type it.
Since making that switch at the end of 2025, the ratio has completely flipped. Almost all of my time goes into describing what I want, reviewing what I get, and designing the tests that prove it works. The total output is higher than it's ever been. The nature of the work is fundamentally different.
The developers who resist this shift aren't going to be outperformed by AI. They're going to be outperformed by developers who use AI well.
Own the Outcome
The job isn't "write code." It never really was, but we could pretend it was when code was the bottleneck. Now that implementation is basically free, the value is in outcomes.
Monitor production behavior. Not just whether the deploy succeeded, but whether the system is actually doing what the business needs. Iterate based on real-world usage, not ticket descriptions. Stay close enough to the business to know when needs change.
The developer who ships a feature and moves on is less valuable than the one who ships it, watches how it performs, and comes back with data about what to do next.
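Outcome-level monitoring can be as simple as comparing business metrics against a baseline rather than checking that the deploy went green. The sketch below is hypothetical: the metric names, values, and drift threshold are invented, and a real setup would pull these numbers from an analytics or metrics system.

```python
# A minimal sketch of outcome monitoring: instead of asking "did the deploy
# succeed?", ask "is the business metric still healthy?". Metric names and
# thresholds here are illustrative, not from any real system.

def outcome_alerts(metrics: dict[str, float], baselines: dict[str, float],
                   tolerance: float = 0.2) -> list[str]:
    """Return a message for each metric that drifted more than `tolerance`
    (as a fraction of baseline), or that reported no data at all."""
    alerts = []
    for name, baseline in baselines.items():
        observed = metrics.get(name)
        if observed is None:
            alerts.append(f"{name}: no data")   # silence is also a signal
        elif baseline and abs(observed - baseline) / baseline > tolerance:
            alerts.append(f"{name}: {observed:.3f} vs baseline {baseline:.3f}")
    return alerts

# The deploy "succeeded", but checkout conversion quietly halved.
alerts = outcome_alerts(
    metrics={"checkout_conversion": 0.015, "signup_rate": 0.031},
    baselines={"checkout_conversion": 0.030, "signup_rate": 0.030},
)
# alerts -> ["checkout_conversion: 0.015 vs baseline 0.030"]
```

Coming back with that list, and a proposal for what to do about it, is what "owning the outcome" looks like in practice.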
Learn the Tools or Get Left Behind
I want to be blunt about this. It doesn't matter where you are in your career. Junior, senior, staff, principal. If you aren't putting real effort into learning how to use AI tools effectively, you are falling behind. Not slowly. Quickly.
This isn't like a new framework you can afford to skip because the current one still works fine. This is a fundamental change in how software gets built. The developers who learn to work with AI are producing more, at higher quality, in less time. That gap is only going to widen. If you're sitting it out waiting to see how things shake out, the things have already shaken out. You're just not looking.
If you're early in your career, the best investment isn't learning another framework. If you're senior, it's not resting on the assumption that your experience alone keeps you relevant. For everyone, the investment is the same:
- Writing clearly. Requirements, specs, architectural decision records, incident reports. Clear writing is clear thinking, and clear thinking is what AI needs from you.
- Understanding systems holistically. Not just the code, but the infrastructure, the data flows, the failure modes, the business context. AI can generate components. You need to understand how they fit together.
- Working with AI effectively. Knowing how to describe what you want, validate what you get, and iterate when it's wrong. This is a skill. It takes practice. Start now.
- Evaluating tradeoffs. Every decision has costs. The developer who can articulate those costs is the one who gets trusted with the hard problems.
The job title stays the same. The leverage changes. And the gap between developers who embrace these tools and those who don't is going to be the defining career differentiator of the next few years. Lean into it. It's the most interesting this job has ever been.