Knowing that work has levels is one thing. Knowing what to do about it is another.
The previous articles in this series established why 95% of AI pilots fail (level mismatch) and what creates that ceiling (the cognitive demands of work at different levels). Now we need to translate that understanding into something practical.
Here's the deployment map I've developed for matching AI to work complexity.
The Deployment Map
THE AI DEPLOYMENT MAP

Work Level            AI Role                     Governance               Human Role
─────────────────────────────────────────────────────────────────────────────────────────
Level 1 (days-3mo)    EXECUTE: full autonomy      Light: automated         Spot-check; handle
                                                  monitoring               exceptions
Level 2 (3-12 mo)     DRAFT: AI creates,          Moderate: sampling,      Review & approve
                      human approves              escalation               outputs
Level 3 (1-2 yrs)     ASSIST: AI inputs,          Structured: mandatory    Decide with
                      human decides               review                   AI input
Level 4+ (2-5+ yrs)   INFORM: data only,          Tight: human-led         Own entirely
                      human integrates

The one-sentence summary for each level:
- Level 1: AI executes, humans spot-check
- Level 2: AI drafts, humans approve
- Level 3: AI assists, humans decide
- Level 4+: AI informs, humans own entirely
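The map reads naturally as a lookup table. Here is a minimal sketch in Python; the level numbers, roles, and governance labels come from the map above, while the class and function names are illustrative, not from any real system:

```python
# Hypothetical encoding of the deployment map: work level -> deployment rules.
from dataclasses import dataclass

@dataclass(frozen=True)
class Deployment:
    ai_role: str      # what AI is allowed to do at this level
    governance: str   # oversight intensity
    human_role: str   # where humans stay in the loop

DEPLOYMENT_MAP = {
    1: Deployment("execute", "light: automated monitoring", "spot-check, handle exceptions"),
    2: Deployment("draft", "moderate: sampling, escalation", "review and approve outputs"),
    3: Deployment("assist", "structured: mandatory review", "decide with AI input"),
    4: Deployment("inform", "tight: human-led", "own entirely"),
}

def deployment_for(level: int) -> Deployment:
    """The Level 4 entry covers 4+: anything above 4 clamps down to it."""
    return DEPLOYMENT_MAP[min(level, 4)]

print(deployment_for(5).ai_role)  # inform
```

The point of encoding it this way is that the mapping is fixed: nothing in the function lets a caller negotiate a more autonomous role for a higher level.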
Let me unpack what this actually means in practice.
What Each Cell Really Means
Level 1: AI Executes
At Level 1, AI should operate with full autonomy. The work is procedural. Classify this document. Route this ticket. Extract these fields. Answer this FAQ. The task is clear, the criteria are defined, and AI can handle the entire workflow.
Governance at this level should be light. Automated monitoring, exception detection, periodic audits. You're spot-checking, not approving every output.
Human involvement is minimal by design. Handle exceptions when they surface. Investigate anomalies when they're flagged. But don't require human approval for routine operations.
Examples: Customer service triage (routing tickets to correct queues), document classification (sorting incoming mail or requests), data extraction (pulling structured information from unstructured inputs), FAQ responses (answering routine questions).
Level 2: AI Drafts
At Level 2, AI creates and humans approve. The work is diagnostic: it involves connecting information in ways that require human validation.
AI generates the first draft. It synthesizes the reports. It identifies the patterns. It produces a recommendation. But a human reviews and approves before anything is final.
Governance is moderate. You're sampling outputs, not reviewing every one. You have clear escalation paths when AI outputs fall outside expected parameters.
Examples: Report generation (AI drafts the weekly summary, manager reviews before distribution), market analysis (AI synthesizes data sources, analyst validates conclusions), content creation (AI produces draft copy, editor refines), project planning (AI generates initial timeline, PM adjusts based on context).
Level 3: AI Assists
Here the relationship inverts. Humans decide. AI provides input. The work involves strategic judgment, and strategic judgment cannot be delegated to AI.
At Level 3, AI's role is preparation. Gather relevant data. Summarize prior decisions. Surface relevant precedents. Model scenarios. But the actual decision remains entirely human.
Governance is structured and mandatory. Every significant output gets reviewed. The review isn't just "does this look right?" It's "do I understand the logic and agree with the recommendation?"
Examples: Pricing strategy (AI models scenarios and competitor responses, executive makes the call), org restructuring (AI maps implications and identifies affected roles, leadership decides the structure), strategic planning (AI synthesizes market data and identifies trends, strategy team determines direction).
Level 4+: AI Informs
At the highest levels, AI is purely informational. It can surface data, generate visualizations, and provide context. But humans own the integration entirely.
Why? Because Level 4+ work requires holding multiple incomplete pictures simultaneously while making irreversible commitments under genuine uncertainty. AI can inform that process. It cannot participate in it.
Governance is tight and human-led. AI outputs are inputs to human deliberation, not substitutes for it.
Examples: Multi-year strategic direction (AI provides market data and trend analysis, executives integrate across business units), M&A decisions (AI supports due diligence, leadership weighs strategic fit), major capital allocation (AI models financial scenarios, board makes investment calls).
The Pattern Nobody Notices
Here's what I've found: as work level increases, AI autonomy decreases and human judgment increases.
This seems obvious when stated directly. But most organizations get it exactly backwards in practice.
They give AI extensive autonomy on Level 3 work. Ask AI to recommend pricing strategy. Let AI draft the go-to-market plan. Have AI design the org restructure. The work sounds sophisticated, so they assume AI should handle it.
And they require heavy approval processes for Level 1 work. Every ticket routing needs manager approval. Every document classification requires human review. Every FAQ response gets checked before sending.
Said another way: bureaucracy on simple tasks, chaos on complex ones.
I've come to understand this as the core implementation error. Organizations flip their governance gradient. They over-govern where AI is capable and under-govern where AI should be constrained.
The result? Efficiency losses at Level 1 (humans reviewing work AI handles reliably), and risk exposure at Level 3 (AI making judgment calls it cannot actually make).
Using the Map: A Four-Step Process
Here's how to apply this framework to any AI initiative.
Step 1: Identify the work level honestly
What's the actual time horizon for this work? Not the project timeline, but the discretion horizon. How far into the future do the consequences of getting this wrong extend?
Who is actually accountable? Not who touches the work, but who owns the outcome if it fails. That accountability level tells you the work level.
Be honest in this assessment. It's tempting to classify work at a lower level to justify more AI autonomy. But misclassification is exactly how the 95% fail.
Step 2: Match AI role to level
Once you know the work level, the AI role follows directly:
- Level 1-2: AI can execute or draft
- Level 3-4+: AI can only assist or inform
There's no negotiating this. It's not a matter of training data or prompt engineering. The cognitive structure of the work determines what AI can reliably do.
Step 3: Design governance accordingly
Don't over-govern Level 1. If you require human approval for every ticket routing, you've eliminated the efficiency gains while keeping the infrastructure costs. Let AI execute and spot-check the outputs.
Don't under-govern Level 3+. If AI is generating strategic recommendations without structured human review, you're exposing the organization to judgment errors that AI cannot detect in itself.
Match governance intensity to work level. Light at Level 1, moderate at Level 2, structured at Level 3, tight at Level 4+.
Step 4: Define the handoff explicitly
Where does the AI stop and the human start? This cannot be ambiguous.
For Level 1, the handoff is exception handling. AI processes until something falls outside parameters, then escalates.
For Level 2, the handoff is approval. AI produces a draft, human reviews and approves.
For Level 3, the handoff is decision point. AI provides inputs, human makes the call.
For Level 4+, the handoff is integration. AI provides information, human integrates it with everything else they know.
Write these handoffs down. Make them explicit. Train people on when and how to take over from AI.
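Written down as code, the handoffs become explicit branch points. A hypothetical sketch, assuming the four handoff rules above; the function name, the `exception` flag, and the action strings are illustrative:

```python
# Hypothetical handoff rules: who acts next on an AI output, by work level.

def handoff(level: int, exception: bool = False) -> tuple[str, str]:
    """Return (actor, action) for the next step after an AI output."""
    if level == 1:
        # Handoff = exception handling: AI proceeds unless parameters are exceeded.
        if exception:
            return ("human", "handle the exception")
        return ("ai", "execute")
    if level == 2:
        # Handoff = approval: every draft goes to a human before it is final.
        return ("human", "review and approve the draft")
    if level == 3:
        # Handoff = decision point: AI only prepares inputs.
        return ("human", "decide using AI inputs")
    # Level 4+: handoff = integration; AI output is one input among many.
    return ("human", "integrate AI information with everything else")

print(handoff(1))                  # ('ai', 'execute')
print(handoff(1, exception=True))  # ('human', 'handle the exception')
```

Notice that Level 1 is the only branch where the AI ever acts next on its own; everywhere else the next actor is a human, which is the governance gradient made literal.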
Common Mistakes
Let me share what I've seen organizations get wrong most often.
Classifying work at the wrong level. Usually downward. "This is just data synthesis" when it's actually strategic judgment. "This is just report generation" when it's actually diagnostic analysis. The reclassification justifies more AI autonomy than the work actually supports.
Assuming better AI fixes level mismatch. GPT-6 won't make Level 3 work into Level 2 work. The levels are about cognitive structure, not capability. Waiting for better models is avoiding the real implementation question.
Designing governance backwards. Heavy process on simple tasks, light oversight on complex ones. This is astonishingly common, and it happens because governance gets designed by risk-averse teams who don't understand work levels.
Ignoring the handoff. AI deployment without clear escalation paths. AI generates recommendations with no defined human review point. The handoff exists whether you design it or not. Better to design it intentionally.
But Here's What the Map Doesn't Cover
The deployment map handles work. It helps you match AI to tasks in a way that respects the cognitive structure of what you're trying to accomplish.
But it doesn't address the human dimension.
Maria in accounting is about to have her job automated. The deployment map tells you AI should execute at her work level. It doesn't tell you who is accountable for her transition. It doesn't tell you who explains the change to her team. It doesn't address what happens to her career.
That's where most implementations truly fail. Not on the technology. Not even on the work level matching. But on the question nobody asks: who is responsible for the humans affected by this automation?
That's the final article in this series.