Programs vs AI
Definition
Traditional software executes deterministic rules; AI predicts probabilistic outcomes. That difference changes how you deploy, supervise, troubleshoot, and assign accountability for each.
The Core Distinction
| | Programs | AI |
|---|---|---|
| Logic | If X then Y — always | Most likely continuation of X — usually |
| Failure mode | Crashes, errors, empty output | Confident, fluent, plausible wrong answer |
| Debugging | Find the bug in the rule | Find the pattern or context that misled the model |
| Deployment | Install, configure, update | Onboard, prompt, calibrate, supervise |
| Reliability | Consistent within defined scope | Variable across task types — see Jagged Frontier |
| Accountability | Clear — the code did X | Distributed — the model predicted X, the human reviewed it |
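The first two rows of the table can be made concrete with a toy sketch. Everything here is hypothetical: `shipping_fee` stands in for a deterministic program, and `predict_answer` stands in for a model by sampling from a fixed answer distribution rather than running any real AI.

```python
import random

# Deterministic program: the same input always yields the same output.
def shipping_fee(order_total: float) -> float:
    # "If total >= 50 then free shipping" -- always.
    return 0.0 if order_total >= 50 else 5.99

# Toy stand-in for an AI model: it samples from a probability
# distribution over answers, so repeated calls on the same input
# can differ -- and every answer it gives looks equally confident.
def predict_answer(prompt: str, rng: random.Random) -> str:
    # Hypothetical learned distribution: mostly right, sometimes not.
    candidates = ["Paris", "Paris", "Paris", "Lyon"]  # ~75% / ~25%
    return rng.choice(candidates)

# The program is reproducible:
assert shipping_fee(60.0) == shipping_fee(60.0) == 0.0

# The "model" is not: identical inputs can produce different outputs,
# and the wrong ones arrive without a crash, error, or warning.
rng = random.Random(0)
answers = {predict_answer("What is the capital of France?", rng) for _ in range(20)}
print(answers)  # typically more than one distinct answer
```

The failure modes differ accordingly: when `shipping_fee` is wrong, there is a bug in the rule; when `predict_answer` is wrong, the output is fluent and plausible, and only review catches it.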
Why the Distinction Matters for Leaders
Leaders who treat AI like software make predictable errors:
“Set and forget.” Software can run unattended in a well-defined scope. AI operating on novel inputs in changing contexts needs ongoing human supervision. The context shifts; the model’s underlying training does not.
“If it worked once, it will work again.” Software behavior is reproducible. AI output on superficially similar inputs can differ significantly — and the model will not flag the difference. See Hallucination as Plausibility Optimization.
“Someone can configure it to do what we need.” Software requirements can be fully specified. AI capabilities are discovered through experimentation, not specification. You learn where the frontier is by testing it, not by reading documentation.
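The "discovered through experimentation" point can be sketched as a miniature evaluation harness. All names here are hypothetical; `toy_model` is a deliberately lopsided stand-in that is reliable on one task type and useless on another, which is how a real model's jagged capability profile surfaces under testing.

```python
# Hypothetical mini-evaluation: you map the frontier by running
# test cases, not by reading a specification.
def evaluate(model, cases):
    """Run each (input, expected) pair and return the pass rate."""
    passed = sum(1 for prompt, expected in cases if model(prompt) == expected)
    return passed / len(cases)

# A toy "model" that is strong on one task type, weak on another.
def toy_model(prompt: str) -> str:
    return prompt.upper() if prompt.islower() else "???"

formatting_cases = [("abc", "ABC"), ("xyz", "XYZ")]
reasoning_cases = [("2+2", "4"), ("CAP", "cap")]

print(evaluate(toy_model, formatting_cases))  # 1.0
print(evaluate(toy_model, reasoning_cases))   # 0.0
```

No amount of reading `toy_model`'s "documentation" would reveal this split; only the pass rates do, which is why capability discovery is an empirical exercise.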
The Parenting Metaphor
Slide 48 of the workshop uses a memorable framing: programs are installed; AI is parented. This is not just rhetorical. It captures something real:
- AI must be onboarded with context, examples, and standards
- It improves through feedback and correction, not patching
- It reflects what it has been “raised on” — the quality and content of its training data
- It needs ongoing supervision, especially in high-stakes contexts
- It can develop bad habits if corrected inconsistently
The parenting metaphor shifts the mental model from “tool to deploy” to “system to develop and oversee.”
What to Pay Attention To
- Where AI is being supervised with the same oversight model as traditional software — which is insufficient
- Where AI failure is being handled as a bug rather than as a prompt or context issue
- Where accountability for AI outputs is genuinely unclear — human, system, or both?
- Whether the people running AI workflows have the skills to evaluate, correct, and improve its performance over time
Connections
- AI as a Prediction Machine
- Transformer Architecture
- Hallucination as Plausibility Optimization
- Govern
Sources
- [inferred from workshop teaching — programs vs AI distinction is well-established in AI literacy literature]
Tags: AI literacy, supervision, programs, deterministic vs probabilistic, accountability