Error is an inevitable part of information-sharing. But when we disseminate error, whether through honest mistake or with ill intent, we risk damaging our discourse. Here's what to look out for in these false forms of information, and how to defend against them.
Played as a child, the game of broken telephone feels harmless. Isn’t it funny how you can whisper one word into one ear, and by the time it’s been passed down a chain of players, the word has changed?
The game becomes less funny when you realize that the wrong message about something of great importance can do great damage in the world, whether accidentally or on purpose.
But inaccuracy is complex. We all perceive the world differently. Words are loaded based on cultural context and personal experience. Visual and sonic illusions work because our sensory equipment varies. Yet we often go to great pains to gloss over obvious differences, to believe that holding a purely objective understanding of anything is feasible. Ironically, this conviction can then leave us ill-prepared for a deeper understanding of the real world. The fact that a visual illusion like “the dress” shocked us at all is more concerning than the mere fact that some saw white and gold, and others saw blue and black.
Why? Because there are consequences to holding too-rigid expectations of “truth”. When we forget human variability and fallibility, we don’t just leave ourselves open to deception. We set ourselves up for disillusionment when, inevitably, something we’ve learned needs to be corrected.
This is why talking about the many forms of error that show up in news and social media is a delicate task. It’s easy for the mere existence of error to be treated as just cause for turning away from whole institutions. But that’s setting the bar out of reach: Every human enterprise will perpetuate false data from time to time. What matters isn’t perfection. It’s understanding intention, mitigating broader impact, and deciding how to handle false data after discovery.
Building our toolkit for media literacy
Misinformation, disinformation, and malinformation abound in media, government, and everyday interactions, but they’re not the only misleading games in town. We also have to deal with truthiness and just-so stories. Together, this “Fab Five” of false forms can mislead us into thinking we have a greater lock on the facts than we do.
Misinformation—incorrect information that is not necessarily intentional, such as an honest error—is perhaps the most unassuming of the Fab Five, but it is dangerous. A typo can transform the meaning of a final report, which might then be relied upon when setting policy. This could prove disastrous, until someone revisits the original document and catches the error. We can also mishear a colleague’s statement, and draw inaccurate conclusions that cause tremendous social-group chaos, with zero ill intent.
But misinformation is also an inevitable part of the journalistic process, especially in an era of gutted newsrooms. To offset the reduction in paid fact-checkers, we have to cultivate a sense of collaborative correction, and be wary of any venue that does not openly acknowledge its structural vulnerability to inaccurate data.
Journalism is called the “first draft of history” for a reason. When an article reports on late-breaking news, there is always the possibility of later intel proving initial reports inaccurate. The problem is, humans have a bias toward initial reports, and struggle to accept modification or even explicit retraction of earlier data.
Any publication that fails to use language highlighting the novelty and fragility of its intel, especially for breaking reports, does a disservice to its audience.
This is not hypothetical. Those who encourage “false flag” readings of major tragedies rely in part on the inevitable chaos of initial proceedings, when reports will necessarily vary due to the inconsistency of human observers under duress. Similarly, although we generally understand the challenge of acquiring accurate data during an active conflict, the reality of reporting on stats released by Ukraine and Russia has complex propaganda politics attached. We’re all working with the best data we have at the moment. But we don’t have to do so credulously.
To defend against misinformation’s negative impact, even if we cannot hope to prevent it in all cases, we have to strive to share information with appropriate confidence intervals for its reliability, along with explanations of where, when, and from whom to expect further updates. When we stop acting like any of us is ever providing the last word on a subject, we train ourselves to stay on the lookout for improved data sets.
Disinformation, on the other hand, is false information disseminated by actors who know it to be false. This includes false information disseminated with good intentions: from a paramedic telling someone they’ll be fine to keep them from dying in terror, to a country at war downplaying local death counts to keep up morale and avoid emboldening the enemy, to a partner telling their spouse that they look better in those shorts than they actually do.
But in practice, the term has a negative connotation because even good intentions can very easily prove damaging. In 2020, Chief Medical Advisor Anthony Fauci’s decision to downplay the value of wearing masks early in the COVID-19 pandemic was fraught with conflicting motivations and data. Fauci later claimed that he was sensitive to the limited supply of masks and wanted to protect the supply for frontline professionals. But in trying to prevent a run on these materials, his initial statements diminished trust in public health officials throughout the pandemic.
Other forms of disinformation are far more insidious. When the express intent is to introduce false data to manipulate public sentiment, sow doubt in public institutions, attack individuals, or otherwise control circumstances for personal gain, we’re dealing with a threat to the foundation of a healthy democracy. We can try to call out bad actors for disinformation, but many see public discourse as a game they know very well how to play. Often, a call-out just gives them more of the attention they wanted in the first place—which is why bad actors are especially excited by the opportunity to “debate” a topical expert around their disinformation.
But the negative impact can be lessened if we don’t try to play the game on their terms. Disinformation must not be signal-boosted and platformed in ways that invite viewing the content as merely controversial. Lies are not controversial: They are lies. Depending on the platform, we can seek legal pathways for the removal of some disinformation. Mostly, though, we need to focus our efforts on amplifying accurate information on its own—not as if it always needs to be mentioned in the same breath as false data, even to correct the latter.
Don’t give disinformation the media-ecosystem oxygen its creators crave.
Malinformation is an even bigger challenge: something that is technically true but is used to do harm. If a political figure is facing criminal charges, someone might attempt to change the news cycle by bringing up an opposing figure’s messy divorce. The complex facets of an opponent’s personal life might well be true, but are they relevant to the discourse, or simply being invoked to humiliate and distract?
Malinformation is so common that many are now rightfully skeptical whenever something sensational floods their news cycle. When hearsay recently emerged around the possibility of non-terrestrial materials kept under wraps by US government agencies, some on social media proposed a different conspiracy: an attempt to deflect from talking about the economy.
Although malinformation is generally seen as information shared to do harm to an individual, organization, or country, it is also used to harm public discourse. Anything fact-based that has been invoked to derail us from active, democratic conversation on public or semi-public forums, or to besmirch the character of a speaker to distract from the content of their speech, falls into this difficult category.
It also bears noting that the distortion of an accurate fact can turn malinformation into a form of disinformation: for instance, when one person’s far lesser crime is treated through widespread media attention as equivalent to the greater crime of another. In this case, it remains true that the first person also transgressed, but the invocation of this fact rises to the level of disinformation because a fact about the crime itself (its severity) has been changed in the telling.
How do we combat the spread of malinformation?
Pointing out when we see it in use helps. So too does refusing to react the way that bad actors—ever so cynical about human nature—expect from us. Malinformation only works when we buy into sensational storytelling in general. When we demand better by asking “So what?”, and when we refuse to partake in social-media reactions that keep malinformation trending, we take back our democratic power.
So, no more signal-boosting malinformation itself.
When you see our focus derailed, call out the act of derailment.
Then help turn the conversation to what matters instead.
Truthiness can feel like a harmless part of the Fab Five. Who among us hasn’t repeated something that has the air of being true without checking their references? But truthiness, as first defined in its modern context on The Colbert Report, often emerges in political speech and punditry, two discourses where hard data goes to die. Because most people rely on others to turn facts into narrative, rather than poring over all the raw data themselves, the use of “truthy” statements in summaries of sociopolitical, economic, or scientific matters can easily mislead, without necessarily being a case of mis- or disinformation.
Truthiness is traditionally defined as “the quality of preferring concepts or facts one wishes to be true, rather than concepts or facts known to be true.” But that notion of wishing versus knowing is muddled in practice. Wishing suggests that, deep down, a part of us knows that a given statement isn’t really true, when that’s not exactly or always the case. A more inclusive description would be that truthy statements feel self-evidently true to such an extent that we either don’t bother to falsify them, or don’t bother to source them at all.
There’s an important difference between those two actions. It is much easier to find a quick link to support a truthy statement than to test the premise by seeking out counter-evidence that might disprove it. But when we seek to combat the role of truthiness in our data analysis and political discourse, we should not be striving for the lowest level of informational rigor.
Rather, we need to get into the practice not only of side-eyeing sweeping declarative statements that lack citation, but also of side-eyeing such statements even if they come with citation. The gold standard for robust commentary on hard data should be context-specificity, and a clearly conveyed awareness of the same data’s limits.
Just-so stories take truthiness to the next level. These are narratives that also have a common-sense feel to them: so much so that checking our facts hardly seems worth the trouble. Traditionally, just-so stories tried to explain the origin of a phenomenon, as in folktales about how the elephant got its trunk or the giraffe its long neck. In contemporary culture, just-so stories are often invoked to explain a given economic, political, or sociological reality. The evidence underpinning their initial construction may have long since turned another way, but they still feel right, so they stay.
These aren’t quite acts of malinformation, but part of a broader fabric of falseness onto which we can all too easily pin perfectly accurate data. If a study emerges that fits the story we’ve already been telling ourselves about how a given demographic acts, or where the world is currently heading, or how we got to where we are, confirmation bias means that we are much more likely to accept that study as true. The just-so story is often the backdrop against which confirmation bias takes place.
Just-so stories are extremely difficult to root out, because whether they’re geopolitical, racialized, gender-specific, creed-based, familial, or philosophical, they often do significant work in shaping our sense of self and our relationships with the surrounding world. To call out just-so storytelling in media cycles or on political campaign trails, or to weed it out of our own discursive practice, is a much more intricate task than simply identifying a given data point as mis-, dis-, or malinformation. We could conceivably spend the rest of our lives striving to overcome all the just-so stories in our communities.
But there is one greater benefit, at least, to grappling with the just-so stories in our news reports, social media chats, and ourselves. Once the broader “just-so”s have been named and questioned, the power of other false data might diminish. After all, we’d be far less susceptible to the negative impact of mis-, dis-, and malinformation if not overeager to adopt whatever new data fits our grand narrative best. We’d also be less likely to take the “truthiness” of more specific claims seriously, if already trained to seek out corroborating evidence (or falsification) in general.
The word “fab” comes from “fabulous”: often flattering, but also referring to something unreal, having no basis in reality. That’s a touch extreme in the case of these “Fab Five” false forms of information. Some of the hardest errors to overcome are challenging precisely because they have some basis in truth. But the use of such a sensational linguistic flourish matches form with function: we need to remember that misinformation, disinformation, malinformation, truthiness, and just-so stories all have a role to play in undermining our best efforts to communicate with integrity, in pursuit of greater human agency and a healthier civic life.
We will all fall victim to false data from time to time, especially with the number of bad actors intent on weaponizing information exchange to personal, political, or otherwise profit-driven ends.
How quickly we recognize false data, and how well we respond to it when we do, will determine the power those bad actors have over us all.