In a recent essay, "Come On, Obviously The Purpose Of A System Is Not What It Does," Scott Alexander pushes back against the slogan that the purpose of a system can be inferred from its operational effects (POSIWID, for "the purpose of a system is what it does"):
When people do list a specific example, it’s almost always a claim that, if you’re unhappy with any result of a system, the system must have been designed by evil people who were deliberately trying to hurt you, and so you should become really paranoid and hate everyone involved.
The charitable reading of this quote is that he is being sardonic and hyperbolic for rhetorical effect, not that he is actually strawmanning the POSIWID perspective as a leap directly from “bad result” to “evil overlords,” skipping the much more reasonable middle position that “maybe Reform X is resisted because the system in fact prefers the current balance between success and failure, even as it professes otherwise.”
The object of POSIWID-style critiques is not a system’s failure to reach its goals, but the pattern of behavior enacted in their name. The point is that many systems present a “motte and bailey” structure, in which they tell the world that they care about Goals A, B, and C, and that nothing will stand in their way (the motte), but in practice allow unstated Goals L, M, and N to trump A, B, and C in almost every instance of conflict (the bailey). What they do is not consistent with what they say they are doing. Yet when the gap is noticed, defenders scramble back to the high ground, insisting that they are still fighting valiantly for the noble goals. They offer reasonable-sounding explanations: of course anyone pursuing such lofty goals might be stymied by complexity, opposition, or circumstance. But this is a dodge; it shields the system from scrutiny over whether its actual behavior aligns with its professed aims. The question is not whether Goals A, B, and C are hard to achieve; let’s stipulate that they are. The question is whether genuine pursuit of Goals A, B, and C would be expected to entail Behaviors X, Y, and Z at Rates G, H, and I.
But I’m getting sidetracked. My intent was not to defend POSIWID as such. (Or maybe it was? I did it, after all.) I wanted to explore another domain where the slogan applies even more forcefully, and with even less excuse: heuristics.
A heuristic is what it does.
And unlike systems, which must wrestle with resource constraints and conflicting goals, heuristics face no such burden. They are pure epistemic instruments. If they fail, they fail entirely on their own terms.
Heuristics are sold to us as modest aids to thought: rules of thumb, gentle cautions, reminders to check ourselves. They are marketed as epistemic training wheels: “When you hear hoofbeats, think horses, not zebras”; “Extraordinary claims require extraordinary evidence”; “Be skeptical of unpopular or conspiracy-like ideas.” In principle, these tools should sharpen inquiry. They are supposed to serve as cheat sheets against known cognitive biases without replacing the work of reasoning itself.
But like systems, heuristics must be judged by their operational behavior, not their aspirational definitions. And when you observe how heuristics are actually deployed, a consistent and disturbing pattern emerges. They are often not modest nudges but powerful distortions.
Conflicting heuristics are not explicitly weighed or reconciled. They are black-boxed. Suppose one heuristic suggests caution toward unpopular ideas, while another tells you to give weight to those with deep field-specific expertise and credentials. And then along comes a dissident immunologist who has formulated a working hypothesis about a particular mechanism of vaccine harm. A truly thoughtful reasoner would notice the conflict, surface it explicitly, and ask how the competing heuristics should interact in the particular case. But in practice, the conflict is submerged. A judgment is rendered, often based on hidden social or emotional cues, and the winning heuristic is retrofitted to justify the decision.
In these moments, heuristics cease to be guides and become post-hoc rationalizations.
Similarly, heuristics presented as gentle nudges are often operationalized as total vetoes. Take the principle that unpopular claims should trigger heightened scrutiny. In theory, this calls for careful questioning: more curiosity, not less. But in practice, the invocation of “unpopular claim” is treated as a reason to disengage entirely. The heuristic functions not as a yellow light inviting caution, but as a red light forbidding inquiry. A tool meant to invite more careful investigation becomes a socially acceptable excuse for epistemic closure.
Finally, the social incentives surrounding heuristics are rarely acknowledged. Heuristics are deployed selectively, not according to some neutral algorithm, but according to which ideas are socially risky, emotionally charged, or institutionally inconvenient. In this way, heuristics function as moral armor: they allow the user to preserve a posture of rational sophistication while executing acts of exclusion and dismissal that are driven by fear, disgust, or conformity.
This gap between the stated and operational role of heuristics mirrors the motte-and-bailey rhetorical pattern. The “motte” is the modest, defensible claim: “We should be a little more cautious.” The “bailey” is the aggressive operational move: “This claim is disqualified without serious examination.” When challenged, defenders retreat to the motte; but the real work is done in the bailey.
Thus, heuristics, like systems, must be judged by what they actually produce. A heuristic that regularly shuts down inquiry is a tool of epistemic suppression, no matter how delicately it is described. A heuristic that encourages rationalization is not a modest guardrail; it is a black box that launders motivated reasoning into the appearance of rigor.
And the consequences are not merely abstract. The misuse of heuristics shapes real discourse and institutional behavior. Scientific communities that apply the “extraordinary claims” heuristic too aggressively may fail to detect early evidence of paradigm shifts. Political communities that blackball unpopular ideas may entrench injustices under the guise of caution. Medical systems that refuse to interrogate outlier cases may institutionalize preventable harms and diagnostic failure. Every time a heuristic is misapplied as a shield against discomfort rather than as a spur to more careful reasoning, we lose something: the possibility of seeing the world more clearly.
If we take rationality seriously, we must evaluate heuristics not as slogans but as behaviors. We must ask not “what was the heuristic supposed to do?” but “what does it actually do, when applied?” Otherwise, we risk substituting rituals of reasonableness for the real thing.
A heuristic is what it does. No more, no less.
For more reading on heuristics, see:
The Conspiracy Heuristic Is a Bug
There’s a widespread habit of dismissing claims based on their resemblance to what we’ve learned to call “conspiracy theories.” This habit is so deeply entrenched that many treat it not just as a social instinct, but as a mark of epistemic hygiene: a way to protect the mind from error, noise, manipulation, madness.
The Heuristic That Misheard Itself
The maxim “When you hear hoofbeats, think horses, not zebras” is widely used in clinical settings to guide diagnostic triage. It is a heuristic grounded in base rate reasoning: given that common conditions are more prevalent than rare ones, initial diagnostic hypotheses should reflect this distribution. It reflects a plausible Bayesian orientation: in c…
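To make the base-rate logic behind that maxim concrete, here is a minimal worked application of Bayes’ rule. The numbers are purely illustrative assumptions for this sketch, not figures from the article: suppose a symptom occurs in 80% of cases of a common condition with 5% prevalence, and in 95% of cases of a rare condition with 0.01% prevalence.

```latex
% Posterior odds of the common vs. the rare condition given the symptom.
% All numbers are illustrative assumptions, not clinical data.
\[
\frac{P(\text{common} \mid \text{symptom})}{P(\text{rare} \mid \text{symptom})}
  = \frac{P(\text{symptom} \mid \text{common}) \, P(\text{common})}
         {P(\text{symptom} \mid \text{rare}) \, P(\text{rare})}
  = \frac{0.80 \times 0.05}{0.95 \times 0.0001} \approx 421
\]
```

Even though the symptom fits the rare condition slightly better, the common condition remains roughly four hundred times more probable under these assumed priors; that is the base-rate reasoning the hoofbeats maxim compresses into a slogan.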