Jagged Frontier
Definition
AI capability is not a smooth gradient from weak to strong; it is uneven. AI may perform impressively on tasks that seem hard while failing on tasks that seem easy. This unevenness, named the “jagged technological frontier,” means leaders cannot decide where to use AI by perceived task difficulty or category alone.
The Research Finding
Dell’Acqua et al. (2023) ran a field experiment with consultants at Boston Consulting Group, using AI assistance on realistic knowledge work tasks. Key results:
- Consultants using AI outperformed non-AI users by 17–43% on tasks inside the frontier, with the largest gains going to lower-skilled consultants. These are tasks where AI genuinely helps: analysis, synthesis, structured writing, idea generation.
- Consultants using AI underperformed non-AI users on tasks outside the frontier (roughly 19 percentage points less likely to reach the correct answer). These are tasks where AI produces plausible-sounding but wrong answers, and humans relied on it instead of their own judgment.
- The danger: AI does not signal when it has crossed from its competence zone into its failure zone. It sounds equally confident on both sides.
Why This Is a Leadership Problem
The jagged shape creates a specific trap. A leader observes AI performing well on several complex tasks and forms a mental model: “This is good at hard things.” They then delegate an apparently similar task — and the AI fails in ways that are hard to catch because the output looks credible.
The practical meaning: confidence is not the same as reliability. AI is not simply good or bad. It is uneven. The leadership discipline is learning where it helps, where it harms, and where human supervision is non-negotiable.
The Car Wash Analogy
A useful teaching example: imagine asking someone to evaluate a car wash. They observe the car entering, the machinery running, the car coming out clean — and confidently conclude “the system works.” But they do not notice the small scratch on the rear door, because they were not looking for it. AI output can look complete and correct while containing a specific, consequential error that the human reviewer was not primed to catch.
What to Pay Attention To
- Which tasks in your workflows look similar to tasks where AI performed well but are actually outside the frontier?
- Where has AI been deployed based on early positive results, without systematic testing of its failure modes?
- Where do humans assume AI output is correct because it sounds authoritative?
- Where is the review process calibrated to catch AI errors — or not?
Connections
- Hallucination as Plausibility Optimization
- AI as a Prediction Machine
- Human Agency Scale
- Protect AI Strengths and Human Strengths
Sources
- Dell’Acqua et al., Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality, Harvard Business School Working Paper, 2023
- HBS - Navigating the Jagged Technological Frontier
- Harvard D3 - Navigating the Jagged Technological Frontier
- Mollick - The Shape of AI
Tags: jagged frontier, task fit, AI limitations, hallucination, trust, human oversight