Last month, I was facilitating a Mob Elaboration session with a development team at a company going through its first AI-DLC adoption cycle. The Product Owner had stated the intent: a new feature for their customer onboarding flow. Within minutes, the AI had generated an initial set of user stories, acceptance criteria, and a proposed decomposition into independent units of work.
Then something happened that I’ve now seen multiple times, but that still strikes me every time.
The room went quiet.
Not because the output was wrong. It was actually quite good. The silence came from a different place. A senior developer, someone with fifteen years of experience, leaned back and said: “So… what do I do now?”
It wasn’t a complaint. It was a genuine question. For his entire career, the value he brought to a team was measured in the code he wrote, the architectures he designed from scratch, the problems he solved by thinking hard and typing fast. And now, in the span of a few minutes, AI had produced a first draft of work that would have taken his team days.
That moment, the pause between the old role and the new one, is where most organizations are right now. And very few are talking about it honestly.
The Shift Nobody Prepared For
I’ve written before about the mismatch between AI code assistants and enterprise development and about the challenge of responsible AI in the enterprise. Both posts focused on process and governance. Neither addressed what I now believe is the harder question: what happens to the people.
Not in the abstract “AI will change jobs” sense. In the concrete, daily sense. What does it feel like when AI generates the first draft of your domain model? What does it mean for a tech lead when three days of planning collapses into three hours?
These questions are playing out in real teams right now. And the answers will determine whether AI-driven development succeeds or stalls, because processes don’t adopt themselves. People do.
From Initiator to Validator
The most fundamental shift in AI-driven development is deceptively simple to describe and profoundly difficult to internalize.
In traditional development, humans initiate and AI assists. You write the code; the AI suggests completions. You design the architecture; the AI fills in boilerplate. The human is the driver. AI is the passenger offering directions.
In AI-DLC, that dynamic reverses. AI proposes. Humans validate.
AI breaks down intents into user stories. AI generates domain models. AI produces code, tests, and infrastructure. At every step, the human role is to review, challenge, correct, and approve.
This sounds like a promotion on paper. In practice, it triggers an identity crisis.
Developers have spent years, sometimes decades, building their professional identity around creation. The satisfaction of solving a hard problem, the pride of elegant code, the flow state of building something from nothing. When AI takes over the generative work, that identity doesn’t just shift. It fractures.
I’ve seen three common reactions:
The skeptic dismisses AI output reflexively. “I could have done this better.” Sometimes they’re right. Often, they’re protecting their sense of relevance rather than evaluating quality.
The rubber-stamper approves everything without scrutiny. “Looks good, ship it.” This is the most dangerous reaction, because it eliminates the human oversight that makes AI-driven development safe.
The validator, and this is the role we need to cultivate, engages critically. They read the AI’s output not as something to accept or reject wholesale, but as a proposal to interrogate. Why did the AI group these stories this way? Does this domain model capture the business rules we discussed? Is this architectural decision the right trade-off for our constraints?
The validator doesn’t write less. They think more. And that thinking is where the real value lives.
What the New Skills Actually Are
If the developer’s role shifts from creation to validation, what skills matter most?
It’s tempting to say “prompt engineering,” and yes, knowing how to interact effectively with AI matters. But that’s a technique, not a skill. The deeper capabilities that define a great developer in the AI era are older and more human than any prompt template.
Domain knowledge. AI can generate a domain model, but it can’t know that your company’s “customer” entity behaves differently in the onboarding context versus the billing context because of a regulatory quirk from three years ago. The developer who carries that institutional knowledge becomes irreplaceable, not because they can code faster, but because they can catch what AI gets wrong.
Architectural judgment. AI will propose a design pattern. It might even explain the trade-offs. But deciding whether event-driven architecture is right for this team, this timeline, this compliance environment requires judgment that comes from experience, not from training data.
Risk intuition. A seasoned developer reads AI-generated code and feels when something is off, not because of a syntax error, but because of a subtle assumption that doesn’t hold in production. That intuition, built over years of debugging, incident response, and late-night deployments, is exactly what AI lacks.
Communication and facilitation. In AI-DLC’s collaborative rituals, Mob Elaboration and Mob Construction, the ability to articulate why an AI proposal is wrong, to build consensus around a correction, and to keep a room of diverse perspectives aligned becomes a core technical skill. Not a soft skill. A technical skill.
As I wrote in Bridging Strategy and Execution in Tech Leadership, the most valuable capability in technology leadership is translating between vision and reality. In AI-driven development, every developer needs that capability, not just the CTO.
The Room Changes Everything
One of the things that surprised me most about AI-DLC in practice is how much the collaborative rituals, Mob Elaboration and Mob Construction, change the team dynamic.
In traditional Agile, collaboration happens in ceremonies: planning, standup, review, retro. Between ceremonies, developers work largely alone or in pairs. The work is distributed, and coordination happens through tickets, pull requests, and Slack threads.
In Mob Elaboration, the entire team sits in one room with a shared screen. AI generates artifacts in real time. The Product Owner, developers, QA: everyone sees the same output at the same moment and reacts together.
This creates a dynamic I hadn’t anticipated: collective validation is qualitatively different from individual review.
When a developer reviews AI-generated code alone, they bring their own perspective and biases. They might miss a business rule they’re not familiar with. They might approve a design pattern they’re comfortable with even if it’s not the best fit.
When a mob validates together, the Product Owner catches the business logic gap. The QA engineer spots the missing edge case. The infrastructure-minded developer questions the deployment model. The junior developer asks the “obvious” question that turns out to be the most important one.
The AI’s output becomes a shared object that the team shapes together. And because everyone was in the room when decisions were made, there’s no ambiguity about why something was built a certain way. The alignment that traditional methods try to achieve through documentation and handoffs happens naturally through shared experience.
I’ve seen Mob Elaboration sessions compress what would have been two weeks of back-and-forth between product, architecture, and development into three hours. Not because people worked faster, but because the conversation happened once, with everyone present, with AI doing the heavy lifting of generation while humans did the heavy lifting of judgment.
The Uncomfortable Truth About Seniority
Here’s something that doesn’t get discussed enough: this shift reshuffles the seniority hierarchy.
In the old model, seniority was largely correlated with technical depth. The senior developer knew the codebase intimately, could write complex algorithms from memory, and had mastered the tools. Junior developers aspired to that mastery.
In the AI era, much of that technical depth becomes commoditized. AI can write the complex algorithm. AI knows the framework’s API better than any human. AI doesn’t forget syntax.
What AI can’t do is understand context, navigate ambiguity, and make judgment calls under uncertainty. And those capabilities don’t always correlate with years of experience.
I’ve watched junior developers with strong domain curiosity and good communication skills become highly effective validators, sometimes more effective than senior developers who struggle to let go of the generative role. I’ve also seen senior developers who embrace the shift become extraordinary: their deep experience, combined with the ability to steer AI rather than compete with it, makes them force multipliers.
The point isn’t that seniority doesn’t matter. It’s that the basis of seniority is changing. Technical depth still matters, but it’s no longer sufficient. Judgment, communication, and the ability to operate in a human-AI collaborative model are becoming the differentiators.
Organizations that don’t acknowledge this shift will face a quiet problem: their best AI-era talent won’t fit neatly into the old career ladders, and their most experienced people may resist the transition not because they can’t adapt, but because nobody told them what adapting looks like.
What Leaders Need to Do
If you lead a development organization, the people dimension of AI adoption is not a “nice to have.” It’s the bottleneck.
You can buy the best AI tools. You can redesign your processes. But if your developers don’t understand their new role, or worse, if they feel threatened by it, adoption will stall.
Here’s what I’ve seen work:
Name the shift explicitly. Don’t let developers figure it out on their own. Tell them: “Your role is changing from code creator to system validator and decision-maker. That’s not a demotion. It’s an elevation.” Say it clearly, say it early, and say it often.
Redefine what “good” looks like. If your performance reviews still reward lines of code, pull requests merged, or stories completed, you’re incentivizing the old model. Start measuring the quality of validation, the decisions made, the risks caught, the alignment achieved.
Invest in facilitation skills. Mob Elaboration and Mob Construction don’t run themselves. The facilitator role (keeping the room focused, ensuring AI output gets properly scrutinized, managing time and energy) is a skill that needs training and practice. Don’t assume your Scrum Masters can do it without preparation.
Create safe spaces to practice. The first time a developer validates AI-generated code in a mob setting, it will feel awkward. That’s normal. Run low-stakes pilots where the goal is learning the dynamic, not shipping production code. Let people build the muscle before the stakes are high.
Watch for the rubber-stamp pattern. It’s the silent killer of AI-driven development. If your teams are approving AI output without meaningful scrutiny, you don’t have a collaborative model. You have unattended AI development. And that’s a governance and quality disaster waiting to happen.
The Human Side Is the Hard Side
Every technology transition I’ve lived through, from client-server to web, from on-premises to cloud, from waterfall to Agile, had a technical dimension and a human dimension. The technical dimension got most of the attention. The human dimension determined the outcome.
Tools scale instantly. Identity does not.
AI-driven development is no different. The models will keep getting better. The tools will keep evolving. The processes will mature.
But the question that will separate organizations that thrive from those that stumble is not “How good is your AI?” It’s “How well did you prepare your people for a fundamentally different way of working?”
The developer’s role isn’t disappearing. It’s being reforged. The ones who embrace it, who learn to think in terms of validation, judgment, and collaborative intelligence, will be more valuable than ever.
The question is whether your organization is helping them get there, or leaving them to figure it out alone in a quiet room, wondering what they’re supposed to do now.
AI will keep proposing. The question is whether your developers are prepared to decide.
Ricardo
