In recent years, terms like “fake news” and “misinformation” have slipped into the mainstream lexicon. Early on, “fact-checking” was a relatively modest enterprise. Outlets like Snopes debunked email hoaxes and urban legends, while political fact-checking sites like PolitiFact and FactCheck.org functioned as narrow corrective tools, testing specific, easily verifiable statements in political speeches or interviews.
It looked something like this:
“Claim: AB made a speech in Dallas on December 17, 1995.”
“Fact Check: False. The speech took place in Austin in 1996.”
That was it. The fact-checkers stayed close to the text. They corrected discrete factual claims. They didn’t assume the role of omniscient narrators, and they didn’t presume to resolve debates over meaning, significance, or social context.
But modern “fact”-checking has expanded dramatically—both in scope and in authority. It no longer confines itself to verifying data points or isolating clear errors. Instead, it polices interpretation: evaluating whether a given claim has been framed appropriately, placed in the right political context, or treated with sufficient institutional deference.
Fact-checkers now act more like referees than librarians. They don’t just engage with what was said. They trace claims across platforms, select the most extreme versions, and judge them as part of a larger social pattern. They step directly into the arena, not to clarify, but to rule.
From Argument to Authority
This expansion has raised a key question: Do these new institutions of fact-checking solve any problems that good old journalism, analysis, or debate could not?
And more fundamentally: Is it even possible to designate, in advance, institutions that will tend to be “more right” over time? And if so, haven’t we already tried that? Isn’t that what newspapers, courts, and universities were supposed to be?
The answer depends on how we define the problem.
What Problem Are We Trying to Solve?
Some claim the core issue is the loss of a shared factual substrate: the “epistemic commons” that once allowed for political deliberation. But this loss is described differently depending on where you sit.
The “fake news” camp sees mainstream media as captured by entrenched financial and ideological interests. Journalists no longer investigate; they market narratives, often in lockstep with political elites.
The “misinformation” camp, by contrast, sees the rise of social media as uniquely dangerous, enabling liars and conspiracy theorists to hijack attention and bypass editorial oversight, flooding the zone with noise.
Each side believes the other is the main threat to public reason. And each believes that some kind of epistemic containment is now required: either a clean break from corrupt legacy media, or centralized moderation of digital discourse.
But what if both are describing something that’s not new at all? What if polities have always struggled over the construction of facts and values? What if this has always been messy? What if we used to have the language for that mess—and lost it?
We Used to Have Tools for This
We used to call these problems what they were:
Lies (now “disinformation”): deliberate falsehoods told to gain power or avoid consequences
Mistakes (now “misinformation”): incorrect claims, often made in good faith
Biases (now “malinformation”): distortions in judgment shaped by incentives or partial views
Agendas (now “malinformation”): structured preferences about what to emphasize or ignore
Each of these terms preserved a burden of proof. When you thought someone was lying, you had to show your evidence. When you spotted an error, you made your correction. When you suspected bias, you pointed to the structural interest at play.
And most importantly, you did it in public.
This created a kind of stable adversarial structure for claims. The result wasn’t always agreement, but it was legible. You could track the moves. You could compare arguments. And over time, reputations rose and fell not because a platform ruled one side out of bounds, but because observers watched and judged for themselves.
That process was imperfect. But it was visible. And it preserved the core idea that truth had to earn its authority.
The Real Problem Is Epistemic Drift
The new terms—fake news, misinformation, disinformation—don’t clarify. They obscure. They tell us less about what is wrong with a claim, and more about whether we’re allowed to engage with it.
They encourage epistemic outsourcing: instead of asking whether a statement is true or false, biased or incomplete, we ask whether someone “credible” has pre-judged it for us. And if they have, we’re told that any further engagement risks “amplifying misinformation” or “platforming harm.”
In this model, dialogue becomes dangerous. Correction becomes complicity. Refutation becomes endorsement. So the only acceptable move becomes silence.
This shift does not solve the problem of error. It makes error harder to detect, because once a claim is dismissed by category, no one is incentivized to check whether the dismissal was fair. Worse, no one is left to persuade those who think otherwise. Would-be interlocutors and truth seekers remain entrenched in their camps, not out of stubbornness, but for lack of the kind of genuine dialogue that changes minds and builds bridges.
What We Lose When We Replace Claims with Categories
By moving from lies, errors, and biases to “misinformation” and “disinformation,” we lose our method. When you think someone is lying, erring, or manipulating:
You engage them.
You make your case.
You bring receipts.
But when you think they are “misinforming,” you don’t refute them. You disqualify them.
The social consequence is clear: instead of persuasion, we get platform removal. Instead of open rebuttal, we get hidden throttling. Instead of visible correction, we get epistemic quarantine.
The epistemic result? We no longer sharpen our arguments. We curate our attention bubbles.
And the political result? The public sees this. And they don’t trust it.
What If the Problem Is Mostly Semantic?
What if we haven’t entered a new epistemic era at all? What if nothing fundamental has changed—except our language? What if the real crisis isn’t the rise of misinformation, but the fall of our confidence in reasoning through disagreement?
We already had the tools. We knew what to do about lies, errors, agendas, and bias. What we’re now being told is that those tools are no longer enough, that the new world is too complicated, too dangerous, too chaotic.
But the proposed solution isn’t epistemic; it’s aesthetic. We’re being asked to believe that trust should be centralized, not earned. That truth is best curated by invisible hands. And that participating in argument may itself be evidence of guilt.
The Way Back
We don’t need a new elite to tell us what’s true. We need the return of public reasoning in all its flawed, noisy, adversarial glory.
That means:
Describing lies as lies and proving the intent to deceive
Calling out errors as errors and correcting mistakes with evidence of our own
Naming and disclosing bias
Challenging and exposing agendas
It means refusing to let “misinformation” function as a rhetorical kill switch. It means preserving the right to challenge, to debate, to demand evidence. And above all, it means trusting truth to emerge through contact, not through containment.