There's a question nobody asks in AI strategy meetings.
It's not about which model to use. Not about data quality. Not about ROI projections. Not about integration architecture. Those questions get hours of executive attention.
The question that gets skipped: Who is accountable for the workers whose jobs you're about to automate?
Not "how will we manage change." Not "what's our reskilling strategy." Not "how do we communicate this." Those are abstractions that let everyone avoid the specific question.
Who, by name, is accountable for Maria in accounting when AI takes over invoice processing?
Who is responsible for developing James in customer service when chatbots handle his routine calls?
Who ensures that Sarah in operations gets a fair shot at transitioning when her Level 1 work gets automated?
The answer, in most organizations, is no one.
The Accountability Gap
Let me be direct about what I've observed. Organizations spend months on AI vendor selection. They spend weeks on data preparation. They invest significant resources in prompt engineering and model fine-tuning.
Then they hand the human dimension to HR with a vague mandate around "change management" and "reskilling."
This is backwards.
Consider the people involved in a typical AI implementation. The direct manager is measured on efficiency gains. HR is responsible for programs, not individuals. The executive sponsor is focused on the business case. The technology team is focused on the technology.
Who is specifically accountable for the individual workers affected?
Consultants talk about "change management" as if naming the category solves the problem. Vendors talk about "upskilling" as if training courses address career disruption. Executives talk about "reskilling" as if retraining automatically translates to re-employment.
These are abstractions. Abstractions diffuse accountability. And diffused accountability means no one is actually responsible when Maria's job disappears and no viable path forward materializes.
A Framework for Human Accountability
Elliott Jaques, who spent over 50 years studying organizational structure, developed a concept that directly applies here. In his Requisite Organization theory, he identified a role called the "Manager-Once-Removed" (MoR).
The MoR isn't just whichever manager happens to sit two levels up. It's a role with a specific structural accountability. The manager-once-removed is responsible for assessing the potential of people who work for their direct reports, and for mentoring them on career trajectory.
In Jaques' framework, the MoR is specifically accountable for:
- Assessing subordinates-once-removed for potential in future roles
- Mentoring them on career trajectory (not day-to-day skills, which is the direct manager's job)
- Ensuring fair treatment when conflicts arise with the direct manager
- Making talent decisions that require distance from immediate operational pressures
Why does this matter for AI implementation?
The direct manager implementing AI is structurally conflicted. They're under pressure to show efficiency gains. They're measured on automation metrics this quarter. They're not in a position to objectively assess which workers should be retained for higher-level work, which should transition to other roles, and which need development for future capability.
I've come to understand this as a fundamental design problem, not an empathy problem. The direct manager isn't malicious. They're just not positioned to take the longer view. They're too close to the operational pressures.
The MoR can take that longer view. They're not judged on this quarter's automation metrics. They have visibility across multiple teams. They can ask the question the direct manager cannot objectively answer: What's the right thing for this person AND the organization?
The Connection to Work Levels
Here's where the framework from the earlier articles connects.
When AI handles Level 1 and Level 2 work, humans should theoretically move up to Level 3 work. Strategic decisions, stakeholder navigation, long-term planning. That's the promise.
But not everyone has the capability for Level 3 work. And not everyone wants it. And not everyone can develop it on the timeline the automation creates.
This is an uncomfortable reality that "reskilling" abstractions let organizations avoid. Some people currently doing Level 1-2 work will be able to transition to Level 3 work with development. Some won't. Making that assessment honestly requires distance from the immediate operational situation.
The Manager-Once-Removed is positioned to make that assessment. They can see which individuals have demonstrated potential beyond their current role. They can have honest career conversations that the direct manager (who needs the person to keep performing their current role until the automation goes live) cannot have.
Said another way: the MoR can tell someone the truth about their situation before the situation becomes a crisis.
The MoR Protocol for AI Implementation
Here's how this translates into practice. I call it the MoR Protocol.
Before Automation Begins
Before an AI initiative launches, the Manager-Once-Removed should assess every affected worker for their potential contribution at higher work levels. Not their performance at their current level (that's the direct manager's assessment), but their potential to operate at the next level.
This assessment should happen before the automation timeline creates pressure. It should result in a clear picture: Which individuals can likely transition to Level 3 work? Which need extended development? Which may need to transition to different roles entirely?
The MoR should also have direct conversations with affected workers about what's coming. Not the corporate messaging. The honest reality: "This is changing. Here's what I see as your options. Let's talk about which path makes sense for you."
These conversations are difficult. They're also respectful. Workers generally know what's coming. Treating them like adults who deserve honest information builds trust.
During Implementation
During the automation rollout, the MoR has career development conversations that the direct manager cannot have.
The direct manager is focused on making the automation work. They need the current team to keep performing while the transition happens. They're not positioned to have honest conversations about "what happens to you after this goes live."
The MoR can have those conversations. They can ensure that workers who are designated for transition actually receive meaningful development, not just training courses that check a box. They can intervene when transition paths aren't materializing.
This is active oversight, not passive monitoring. The MoR is accountable for outcomes, not just processes.
After Implementation
After automation is live, the MoR ensures that workers weren't just "reskilled" in name but actually developed into roles where they can contribute.
Did the workers designated for Level 3 roles actually transition successfully? Are they performing, or were they set up to fail? Do they need additional support?
What happened to workers who couldn't transition to higher-level work? Were they transitioned to other roles with dignity? Were they treated fairly in the process?
The MoR follows through. Accountability doesn't end when the automation goes live.
Why This Matters for AI Strategy
Here's the practical argument beyond the ethical one.
Implementations that ignore the human dimension create resistance. Workers see what's coming before leadership announces it. They start protecting information that would make automation easier. They slow adoption through a thousand small frictions. They leave for other opportunities before you're ready to lose them.
I've seen this pattern repeatedly. Organizations wonder why their AI initiatives face "cultural resistance." They hire change management consultants to improve communications. They create training programs to "address concerns."
None of it works because the actual concern isn't being addressed: Who's looking out for me?
Implementations that address the human dimension build trust. Workers become partners in transformation rather than obstacles to it. They contribute institutional knowledge that makes automation better. They stay during the transition because they trust they'll be treated fairly.
The difference isn't empathy (though empathy matters). It's structural accountability. Someone, by name, is responsible for individual workers. That changes everything.
The Complete Framework
This article completes the framework I've been developing:
- Why 95% of AI Pilots Fail: The level mismatch problem. Organizations deploy AI at the wrong levels of work.
- The Hidden Architecture of Work: Why AI has a ceiling. The cognitive operations required for Level 3+ work are structurally beyond current AI capabilities.
- The AI Deployment Map: Matching AI to work complexity. A practical framework for determining AI's role at each level.
- The Question Nobody Asks: Human accountability. Who, specifically, is responsible for the workers affected by automation?
Every AI framework addresses technology. Many address work design. Almost none address the human accountability gap that determines whether workers become partners in transformation or casualties of it.
The Manager-Once-Removed Protocol fills that gap. It applies Elliott Jaques' organizational science to ensure that someone, by name, is accountable for every worker affected by AI implementation.
That's not just the ethical thing to do. It's what makes implementations succeed.