Overview:
Anxieties over AI "deep fake" technologies aren't new. We've struggled with crises of authority and "truth" throughout modern human history. Here are a few, and what we can do to avoid errors in reasoning.
White women have a storied history of fabricating assault at the hands of BIPOC men. On March 14, 22-year-old Eleanor Williams became part of the small subset of people to face legal consequences for false claims of abuse: claims that, in her case, sparked a season of hate crimes and drove three men to attempt suicide. Her elaborate story involved the use of multiple online accounts to pin blame for photos of her battered body on a series of locals, white and BIPOC alike, but most severely on Mohammed Ramzan, a local businessman who she falsely claimed had trafficked her in Amsterdam. Ramzan wasn't the only Asian man affected; while he received hundreds of death threats and had to leave his hometown of Barrow, Cumbria, for a few months, other Asian-owned businesses in the 97 percent white community endured property damage and death and rape threats, too.
Williams has been sentenced to 8.5 years for “perverting the course of justice”.
Meanwhile, the town that swung into emboldened racism after her social media posts, donning "Justice for Ellie" merchandise and launching solidarity protests alongside attacks on property and threats to personal safety, is reeling at its own actions: its eagerness to take violence upon itself instead of awaiting the due process of a formal investigation. Also (and more pressingly) reeling are the victims of these hate crimes themselves, including the white men smeared in Williams' statements, their homes vandalized and their lives upturned. So too are any victims of sexual violence who will now be more hesitant to come forward, lest they be treated with even greater suspicion after Williams' false accusations.
Pursuing ‘truth’ in a society
In the world of atheist and theist debate, much is made of the importance of differentiating between empirical claims and everything else. In the broader world, a much more complex set of human behaviors underpins what we decide is true.
In early 2020, when the US Centers for Disease Control and Prevention claimed that masking wasn't necessary (while strategically discouraging a run on the masks needed by medical professionals), its actions as an ostensibly science-driven agency contributed to a huge epistemic schism in our approach to public officials. Should one always support an organization representing formal research, irrespective of politics? Or only when what the organization says amplifies one's own view of what is true?
The CDC confidence crisis is one of many in our current world, which is both over-saturated in data and lacking in consensus on its authorities. The COVID-19 pandemic, which involved routinely adjusted intel as scientists and medical professionals learned more about the disease and human reactions to it, was simply an excellent litmus test of our overall scientific literacy. We unfortunately did not pass.
Today, confidence in many public health institutions remains low not because of how frequently guidelines changed, but because of profound disconnects between the knowledge we had and how that knowledge was performed by certain public officials. "Trust the science" is never as effective in a rapidly evolving crisis as "Trust the scientific method, whose built-in falsification and replication mechanisms provide long-term correctives to initial data", but the former sure is catchier. It is extremely difficult to get people to recognize that science is not a static body of knowledge so much as an aggregate of best-fit results to date.
Nevertheless, on some level we recognize that scientific data is filtered through human actors, because people across the political spectrum are highly attuned to recognizing researcher bias in specific fields. (Rarely the same fields, of course. We all have our priorities.) But therein lies a related crisis of authority: We are all more inclined to accept that which supports existing views, and more resistant to ideas that do not. We also have a huge bias toward initial intel, and against retractions or corrections (although recent research suggests that we can mitigate what’s called “belief perseverance bias” with prebunking and debunking strategies).
In other words, we are highly impressionable, yet put great stock in our ability to “reason” at a remove from cultural conditioning. This creates significant epistemological variation even in the same general population. One approach to public health institutions after the CDC’s failings will seem highly logical to one subgroup, while another approach will seem equally logical to another, depending on which epistemic claims each group valued prior to that inflection point.
How does one navigate a world filled with people so often and so easily divided along epistemic lines? One might think that atheists and people with minority religious beliefs would be better prepared for this challenge, but our critical thinking is often siloed in cosmological debate, leaving us susceptible to crises of "truth" in other realms. As the nonreligious population rises, though, it becomes more difficult to chalk up fundamental epistemic divisions to mere differences in cosmology. When so many of us are split on what constitutes "authority"—in science, in legal affairs, in governance—the threat to our sense of social order and collective identity remains just as existential with all spiritual matters set to one side.
Humanist thinking can help, but it’s not easy. Plenty wrongly assume that the height of humanist praxis is getting everyone to a state of full agency so that, of course, they’ll all then come to the same conclusions about every major social issue: all settle, that is, on the same moral peaks in our vast human landscape.
Therein lies a road to folly, for creatures all shaped by distinct experiences of the world. But thankfully, it’s a road we’ve been down many times before.
The current AI panic over ‘truth’
With machine learning in the news through sensational products like OpenAI’s GPT-4, the question of “truth” is again a matter of public debate. In a world where visual and text-based content can easily be fabricated, how will we protect against bad faith actors sowing propagandist rhetoric or outright lies into our histories, our debates, and our democratic chores? What will it mean to live in a world where it’s easier than ever to stage fake videos, craft humiliating online content, and weaponize popular media against whole groups of human beings?
The trick lies in the question: “easier” is a comparative. “Easier” reminds us that we’re not actually dealing with a novel problem.
Indeed, we should probably be more concerned by how often our media inclines us to think that threats from new technology are somehow new themselves. This narrative relies on false nostalgia: a misrepresentation of history that pretends there was ever a time without data manipulation, and without the extreme dangers to health, livelihood, and cultural cohesion that emerge from it. As the aforementioned Williams case highlights, humans have never wanted for the means to lie, and to cause immense damage with those lies.
A more accurate, if also more disconcerting way of viewing history is this: We have always been grappling with the question of what is true, and whom to believe. We have always been drawn together by only the loosest and most fragile of social consensuses around certain truths, and certain truth-speakers. And as exhausting as this might seem, historical precedent with related challenges also arms us with plenty in the way of possible recourse, to help brave the latest waves of bad-faith actors whose actions now pose the greatest threats to public discourse.
Wagging the dog, over centuries
In 1997, Wag the Dog offered a version of our panic about new technology, through its “comedic” story of political wranglers who fake a whole war to distract US voters’ attention from a government sex scandal. At the time, green-screen tech was the major site of future shock; if you could film something happening against any old backdrop in a studio, what was going to stop people from faking world events?

But even then, the film relied on a deeper and more longstanding cynicism around mainstream media: the knowledge of how easily a paper or news broadcast could be made to toe a given party line. This was during the rise of whole networks filled with shock-jockeying pundits, each with a clear political cant to their notion of "truth". In the battle between serious news (as better befits a serious democracy) and viewer entertainment, "infotainment" wins out whenever corporate actors control the game.
In 1976, Network made a similar argument. When its valiant, long-suffering protagonist tried to rail against the way of talking about the world prescribed for him by corporate powers, he ended up creating a form of outrage-based news media that his corporate bosses loved, and leaned into instead.
Similarly, in 1962’s The Man Who Shot Liberty Valance, John Ford took us way back to the Wild West of the US, and there showed how quickly a story, through credulous news coverage, can spread like wildfire, irrespective of its truth.
Such filmic outings attest to a deep history of false reports. But what's shocking about that? Humans throughout history have often acted on less than full evidence: on conviction, more than rigorous empirical data. In the US alone, we know that rumors and territorial motives drove witch trials, persecution of immigrants and minorities, and lynchings, especially of Black men. We know that when the massive "Weapons of Mass Destruction" fraud was perpetrated by the US government, to justify a war that shattered Iraqi and US lives, major newspapers failed to do due diligence because popular fervor made the "truth" less important than being seen as patriotic.
Trusting in the Platonic ideal of a society
There are some who argue that “truth will always out”. It certainly did in the case of the Rwandan genocide of 1994, which prompted the country to create an annual period of remembrance, and to implement extensive truth and reconciliation processes where everyone’s experiences of the trauma and loss could be heard. Likewise, the end of the Holocaust in Nazi Germany marked the beginning of almost eighty years now of social reckoning and archival work.
But that “truth” didn’t come in time for hundreds of thousands of Rwandans. And it didn’t come in time for the six million Jewish people killed under the Third Reich.
Charles Darwin was haunted by his own realization of nature’s immense brutality, which leaves so many simply slaughtered along the way to the world we live in now. His sentimentality comes out in On the Origin of Species, as he tries to grapple with all the losses incurred in the process of natural selection:
When we reflect on this struggle, we may console ourselves with the full belief, that the war of nature is not incessant, that no fear is felt, that death is generally prompt, and that the vigorous, the healthy, and the happy survive and multiply.
We may indeed console ourselves with such platitudes, but there is a danger in thinking that, as Martin Luther King, Jr. famously said, "The arc of the moral universe is long, but it bends toward justice." Much as the idea might have been intended to hearten us in the fray, it can just as easily leave us overconfident in a universe that will naturally incline itself toward less suffering over time.
If history has shown us anything, it’s that so long as there are new humans, there will always be new opportunities to lose whatever we gained through preceding struggle.
Embracing the endless challenge of ‘truth’
Do humanists do enough to address the fragility of knowledge in everyday practice? Do we wrestle thoughtfully with today’s most pressing epistemological challenges? Or is the “god” question such an easy target that we prefer it to the mess that ensues when talking about other issues in the public sphere?
To grapple effectively with questions of truth and authority, we need to be willing to adopt difficult positions, identify our own points of ideological resistance, and reframe how we engage with people who do not share our points of view. For instance:
Do we defend specific speakers or institutions above the ideals for which they ostensibly stand? Why? What tribal fealty keeps us from being able to admit when a given speaker or institutional stance is in error?
Conversely, do we fall prey to the opposite extreme, and lose confidence entirely in a given institution or speaker in the wake of certain errors? Why? What idealistic threshold are we setting on the creations of our fellow human beings?
When faced with the risk of fake content from new tech, which operating principles in our data-gathering processes are actually under threat? Can we develop ways of moving through the world that aren't as reliant on perfection from the latest trending news?
When confronted with an act of violence, something that shocks our sensibilities and compels immediate action, what checks and balances can we put into place, to ensure that we’re not charging into error, and doing further damage as we go?
And perhaps most critically, how do we prevent inaction from being weaponized as the only “logical” response to the risk of inaccurate data? What generally proactive stances can we take in the world, to keep society moving toward an overall lessening of violence without tethering our activism solely to the latest outrage?
Human frailty, and its systems
Human beings are susceptible to following charismatic figures, cults of personality, and generalized groupthink. Worse still, we each grow up with strong senses of inner conviction that incline us to overconfidence in our personal ability to reason well: to weed out bad intel and elevate only the good.
And never are we more dangerous to one another than when we forget these facts, and the sheer animality of our natural condition.
Even exercises like this article merely attempt to rally others around one idea of truth, but that idea will never map smoothly onto the whole of society, because "society" itself is an ideal: a loose skein of presumed shared values that we set over fellow human beings, hoping that those values will hold where it counts.
When an institution or a person breaks from those presumed shared values, it can destabilize the whole exercise. Going forward from the break, schisms form over what kind of claim is to be believed. If the schisms grow wide enough, greater cultural rifts—even physical, informing the creation of new borders—might emerge. But mostly, we’ll be left to live somehow in close proximity to profoundly disparate beliefs.
In "The Myth of Sisyphus", Albert Camus answers humanity's ache for meaning in a "silent" cosmos by imagining a middle position between believing in platitudes and killing oneself because the universe holds no meaning. Instead, one must imagine Sisyphus, the man condemned to futile struggle, happy. And not "happy" because that is a natural state for survivors (as Darwin tried to console himself), but "happy" as a kind of revolt, an absurdist refusal of the conditions set by a silent cosmos.
Something similar might be called for as we invoke notions of shared truth and collective striving for a less violent world ahead. Long before today's machine learning technologies posed fresh epistemic crises for news media and our democracies, we had plenty of other brutal reminders that group behavior shapes human applications of "truth" far more than hard evidence ever will.
The rock before us is large and very old, and it is as likely to roll backward as to move us into kinder terrain. Whether this situation leaves us laughing or crying, the work itself remains. We will always need as many hands pushing in a similar direction as possible, if we ever want to live to see less violent days.