Human Agency Scale
Definition
The Human Agency Scale (HAS), developed by the Stanford SALT Lab WORKBank project, measures the level of human involvement workers prefer for a given task — from full AI automation to full human control. It separates technical feasibility from desired human agency.
The Core Finding
Workers consistently want more human involvement than AI experts judge technically necessary.
- 47.5% of workers wanted more human involvement than experts judged technically necessary
- 16.4% wanted two or more levels more human involvement than the expert recommendation
- H3 (equal partnership between human and AI) is the most common worker-preferred level, dominant in 47 of the 104 occupations studied
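To make the gap statistics concrete, here is a minimal sketch of how such percentages could be computed from paired ratings, assuming each task carries a worker-preferred and an expert-assessed HAS level as an integer from 1 to 5 (higher = more human agency). The function name and the sample data are invented for illustration, not taken from the WORKBank dataset.

```python
# Hypothetical illustration: derive desire-capability gap statistics from
# paired HAS ratings. Levels are integers 1-5; higher means more human agency.
# The sample data below is invented, not from WORKBank.

def gap_stats(pairs):
    """pairs: list of (worker_preferred, expert_assessed) HAS levels.

    Returns (share wanting more human involvement than experts judged
    necessary, share wanting two or more levels more).
    """
    n = len(pairs)
    more = sum(1 for w, e in pairs if w > e)            # worker wants more agency
    much_more = sum(1 for w, e in pairs if w - e >= 2)  # gap of 2+ levels
    return more / n, much_more / n

# Invented example: four tasks with (worker, expert) HAS levels
sample = [(3, 2), (4, 2), (3, 3), (2, 3)]
share_more, share_much_more = gap_stats(sample)
print(share_more)       # 0.5  -> half prefer more human involvement
print(share_much_more)  # 0.25 -> one in four prefer 2+ levels more
```

In WORKBank's terms, the reported 47.5% and 16.4% are these two ratios computed over the real task-level ratings.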
What the Scale Means
The HAS has five levels, from H1 (full AI autonomy) to H5 (full human control). The most important insight is not which level is “correct”; it is the gap between what can be automated and what should be automated, given the human meaning of the work.
A task may be technically automatable. That does not mean people want it fully automated. Workers may still want review, partnership, control, explanation, or final decision rights — and those preferences are legitimate design inputs, not resistance to overcome.
The dominance of equal partnership (H3) as the preferred level shows that most workers are not opposed to AI: they want AI as a collaborator, not a replacement.
Leadership Implication
When organizations automate beyond the level of agency people consider legitimate, resistance is not a communication problem. It is a design problem. The fix is not a better change management campaign. It is recalibrating the autonomy level to match what the work actually requires.
The practical test: “Do not automate just because AI can. Match the AI strategy to the level of human agency the work requires.”
This also explains the Klarna pattern: the company reduced human customer service agents dramatically, then had to rehire as AI interaction quality degraded without human judgment in the loop.
What to Pay Attention To
- Where the gap between expert judgment and worker preference is largest in your organization
- Where automation decisions were made without consulting the people doing the work
- Where “resistance” is actually legitimate agency preference that was never surfaced
- Where the equal partnership level (H3) could be the right default before moving higher or lower
Connections
- Adoption Gap
- Tasks vs Jobs
- Hybrid Human-Agent Teams
- Autonomy Levels for AI Agents
Sources
- Stanford SALT Lab, Future of Work with AI Agents / WORKBank (primary source for HAS statistics)
- Stanford WORKBank paper, Future of Work with AI Agents
- SALT-NLP WORKBank Dataset
Tags: human agency, automation, adoption, WORKBank, Stanford