Over the past year, I’ve had the same conversation with at least thirty CTOs and VPs of Engineering. The setting changes: a conference hallway, a video call, a dinner after a workshop. The words change. But the conversation is always the same.
It starts with a number. “Our developers are 40% more productive.” Or 30%. Or 50%. The number varies, but the confidence behind it doesn’t. They’ve measured it. They have dashboards. The board has seen the slides.
Then comes the pause. The part where the voice drops half a register and the real question surfaces.
“So why does it feel like we’re shipping less?”
That question, the gap between what the metrics say and what the teams experience, is what Reimagine, Don’t Retrofit is really about. Not a methodology. Not a framework. A leadership conversation that most organizations haven’t had yet.
The Question Behind the Question
When a CTO tells me their teams are faster but not better, they’re not describing a tooling problem. They’re describing a process problem that manifests as a leadership problem.
The tools work. AI coding assistants genuinely reduce the mechanical burden of writing code. Developers spend less time on boilerplate, less time on syntax lookup, less time on the repetitive work that consumed hours but added no intellectual value. That part is real, and it shows up consistently across every industry I work in.
What’s also real is what happens next. Code reviews become bottlenecks because reviewers can’t keep up with the volume. Integration failures multiply because AI-generated services make different assumptions about the same API contracts. Security gaps appear because the AI was never given the organizational context that a senior developer carries in their head. Sprint velocity charts trend upward while customer-facing defects trend upward too, and nobody connects the two because they live in different dashboards.
I wrote about this pattern in The Mismatch Between AI Code Assistants and Enterprise Software Development several months ago. What I didn’t fully appreciate then was that the mismatch isn’t just technical. It’s organizational. The process itself, the ceremonies, the metrics, the coordination mechanisms, was designed for a world where AI didn’t exist. Pouring a fundamentally different capability into that container doesn’t make the container work better. It makes the container crack.
The CTOs I talk to sense this. They can feel the cracks. But the dashboards say everything is fine, and challenging a dashboard that says “40% more productive” requires a different kind of argument than most engineering leaders are used to making.
That argument is what the book provides.
The core of it is a single inversion: traditional development assumes humans generate and machines assist. AI-driven development reverses that. Machines generate, humans validate. Once that reversal happens, every structure built around human-only production begins to fracture.
Why This Is a Leadership Problem
There’s a reason I didn’t write a technical manual. The organizations that struggle with AI adoption aren’t struggling because their engineers lack skill. They’re struggling because their leaders are asking the wrong question.
The wrong question is: “How do we make AI work better in our sprints?”
The right question is: “Are our sprints still the right way to work?”
That second question is uncomfortable because it challenges investments that took years to build. Agile transformations. Scrum Master certifications. Velocity tracking infrastructure. Role structures. Careers built on mastering a specific way of working. When someone suggests the process itself might be the problem, the resistance isn’t just intellectual. It’s emotional.
I’ve been in those rooms. I’ve watched a Scrum Master’s face when the implication lands that the ceremonies she’s spent a decade perfecting might need to evolve. I’ve watched an architect process the idea that his design review board, the one he built from scratch and is justifiably proud of, might be slowing things down rather than protecting quality. These are skilled, dedicated people. The conversation isn’t about telling them their skills don’t matter. It’s about telling them the context has changed, and their skills need to be applied differently.
That’s a leadership conversation, not a technical one. And it requires something that no framework or methodology can provide: courage.
The Cloud Parallel
I’ve seen this pattern before. Not with AI, but with cloud.
In the early days of cloud adoption, most organizations tried to retrofit. They took their existing on-premises processes, their existing governance models, their existing operational practices, and applied them to cloud infrastructure. They treated cloud as a different place to run the same things the same way.
The result was predictable: higher costs, worse performance, more complexity. They were managing two operating models instead of one, and neither worked well.
The organizations that succeeded recognized that cloud wasn’t just a new place to run workloads. It was a new operating model. They redesigned governance for dynamic infrastructure. They adopted infrastructure as code. They rethought security for shared responsibility. They changed how they measured cost.
I wrote about this when discussing the silent power of cloud governance: the organizations that resisted governance because it felt like it would slow them down ended up slower than the ones that invested in it early. Governance didn’t constrain cloud adoption. It enabled it.
The parallel to AI-driven development is direct, and I believe the stakes are higher. Cloud changed where and how you run software. AI changes how you build it. The organizations that treat AI as “a faster way to do what we already do” will experience the same disappointment as the organizations that treated cloud as “a different place to run what we already run.”
What the Book Actually Argues
The book makes a single, sustained argument across thirteen chapters: the software development lifecycle itself needs to be redesigned for a world where AI is a central participant in how software gets built.
Not the tools. Not the infrastructure. The process. The way teams plan, elaborate, construct, validate, and measure their work.
The first four chapters diagnose what’s happening: the productivity paradox, the “faster horse” trap of retrofitting AI into Agile ceremonies, the governance gap when AI-generated code bypasses human controls, and the vibe coding phenomenon where speed without structure creates technical debt at a pace no organization can sustain.
If you’ve followed this blog, some of that diagnosis will feel familiar. I explored pieces of it in posts about why AI needs a new development lifecycle, the developer’s evolving role, and why AI fails on existing codebases. But blog posts are sketches. The book is the complete picture: the same stories followed across chapters, the same organizations tracked from confusion to clarity, the connections between problems that only become visible when you lay them out end to end.
The middle five chapters present the methodology: the AI-Driven Development Lifecycle, created by Raja SP and his team at AWS, which I’ve translated into enterprise practice. A new architecture of work built around Intents, Units, and Bolts. The brownfield reality. The redefinition of the developer’s role from code producer to judgment provider.
The final four chapters are the ones I think matter most for the CTO audience: governance as enabler, pilots that generate evidence instead of enthusiasm, metrics that measure value instead of velocity, and where all of this is heading as AI capabilities compound.
The Conversation You Need to Have
Here’s what I’ve learned from the CTOs who’ve moved past the dashboard and into the real work: the hardest part isn’t the methodology. It’s the conversation.
The conversation where you tell your leadership team: “The way we’ve been working isn’t wrong. It was right for the world it was designed for. But that world has changed, and we need to change with it.”
The conversation where you tell your engineering managers that velocity is no longer the metric that matters, and you need to find new ways to measure value.
The conversation where you tell your best Scrum Master that her facilitation skills are more valuable than ever, but the ceremonies she’s facilitating need to evolve.
The conversation where you tell your board that the “40% productivity improvement” they celebrated last quarter is real at the individual level and misleading at the organizational level, and that you need investment in process redesign, not just tool licenses.
Those conversations require evidence. Evidence requires a pilot. A pilot requires someone willing to start.
The Starting Line
I’ll close with the same thing I wrote in the book’s epilogue, because I haven’t found a better way to say it:
“A book on a shelf changes nothing. A pilot with one team, one Intent, one quarter of honest measurement, that changes everything. It gives you evidence. Evidence gives you conviction. Conviction gives you the mandate to scale.”
The book is available at reimaginedontretrofit.com. If you’re a CTO, VP of Engineering, or Engineering Director trying to figure out how AI changes the way your teams work, this is the argument I’d make if we had three hours together instead of a few minutes in a conference hallway.
And if you’ve already had the conversation, if you’ve already felt the gap between the dashboard and reality, I’d like to hear how it went. The methodology evolves with every organization that applies it.
Ricardo
