Long-term thinking and reducing waste when donating are good impulses. But longtermism, as the theory has recently been advanced by many tech magnates, can be a mess. Secular folks are no less susceptible to self-serving fallacious thinking, so how do we avoid the worst of it?


In 2004, a tsunami and earthquake killed almost 230,000 people in 14 Indian Ocean countries. Many forms of relief then mouldered on the beaches—used clothes, high-heeled shoes, expired medicines—because “in-kind” donations are well known not to be effective forms of aid on a global scale. The wrong hair products to survivors of Hurricane Katrina. Old tin cans to local food banks. If you know your neighbors, and you know their needs, you can probably offer material donations that will make a difference. But most of the time, financial donations are the best way to respond to human crises.

And when individuals are empowered to know which donation platforms are the most effective? When they understand that large overhead costs aren’t always a red flag, but can be, and are aware of which aid organizations actually have good working relationships and are wanted in distant regions? All of that can be a boon to the work as well.

But certain movements, such as effective altruism (EA), which was recently spotlighted by Sam Bankman-Fried (CEO of now-bankrupt crypto giant FTX), can take these concepts to a dangerously self-serving extreme. If financial inputs are often the best way for individuals to reach others in need, the argument in some circles goes, then surely the best thing a person can do is amass as much financial capital as possible, and personally optimize its use.

Why wouldn’t you work for petrochemical companies or as a hedge fund manager now, if earning an exorbitant salary could give you more philanthropic power down the line? If you really worry about the environment, why not make crypto millions ASAP, to offset the environmental cost of blockchains later? Don’t you care about making a real difference?


Such fallacious thinking takes for granted that there is no better route to improving human outcomes than by playing into the economic game crafted by growth-oriented capitalism. However, it’s an error in judgment to which rational-empiricists are quite susceptible, because it rewards the myth of individual exceptionalism, and flatters the idea that we who have run through all the mental moral arithmetic are of course the right people to be subverting democratic process, securing outsize wealth to guide its redistribution ourselves.

One clear “tell” for self-serving thinking is when the conclusions of initial analysis don’t require many changes to pre-existing behavior. This is easy enough to identify when it comes to religious experience (e.g., when one’s god conveniently holds the same views you do on all social issues), but often more of a challenge in the secular realm. Nevertheless, when people who already have power, or who would have sought outsize power in society anyway, come out of their critical analysis fortified in the belief that it was morally correct for them to hold or seek out power all along, that should give one pause.

With what limits? With what checks and balances? And on what robustly replicable basis for other self-appointed leaders of present and future altruistic chores?

From effective altruism to longtermism

But it gets worse, because discussion of EA also often involves longtermism, a kind of long-term thinking heavily aligned with tech industry magnates like Peter Thiel and Elon Musk. There are many ways to think about our obligations to the future, but one of the most influential of late builds on the work of Nick Bostrom, who in 2002 coined the term “existential risk” while advocating for the idea that humanity has an obligation to fulfill its potential.

What is our “potential”? Well, that’s where more self-flattery comes into play, because for folks like Bostrom, who founded the Future of Humanity Institute (FHI), and institute researcher Toby Ord (The Precipice: Existential Risk and the Future of Humanity [2020]), it lies with cosmic colonization, optimizing our dominance over nature via tech, and generally pursuing the transformation of our species into an elevated form of sentient life. Along with Forethought Foundation director William MacAskill, author of What We Owe the Future (2022), such theorists argue that ethical decision-making requires us to recognize that current suffering is a footnote in the history of the human species. A numbers game, as it were. They argue that, because more unborn humans will one day exist than do here and now, we have a moral obligation to prioritize planning for their needs as well.

(Or in MacAskill’s case especially, because more neurons stand to benefit. His latest work includes a comparison of the value of animal lives by neuron count.)

Prominent figures in this movement skew toward technological threats to this imagined future. Jaan Tallinn, who first made his money with Skype, joins Thiel in being fixated on the threat of artificial intelligence destroying the human species. They’ve both invested in the Machine Intelligence Research Institute to prevent that dreaded end. Climate change? Well, that’s more of a short-term blip, in much of the calculus of these theorists: troubling, yes, but not insurmountable in the grand scheme of things, especially if we keep investing in more elaborate tech-utopian interventions. (Along with “apocalypse” estates for the wealthiest in Silicon Valley to ride out the worst environmental outcomes.)

What’s new again is very, very old

Ironically, one of the most famous works of longtermism, Isaac Asimov’s Foundation (1951), was a fix-up novel, a book patched together from stories published throughout the 1940s in Astounding Science-Fiction. In other words, this iconic sci-fi tale, which imagines a theory of psychohistory that allowed one man to calculate the future and mitigate threats to human civilization thousands of years off, was not originally planned as such. It was less a grand design from the outset than a product of story publication processes in a niche mid-century literary culture. And yet, it’s the fictive vision this book contains, of one brilliant man who could steer eons of future history, that lingers in our cultural consciousness.

A more recent SF work, Sue Burke’s Semiosis (2018), reveals a similar error of reasoning that often has us thinking that what is complex and intricate requires careful, top-down monitoring and direction. It’s a multi-generational story of a future human colony, and it landed a place on the Arthur C. Clarke Award shortlist ostensibly for its use of botany and exploration of plant sentience. But for a work teeming with scientific terms, it also makes a surprisingly anti-evolutionary argument, around a plot where colonists have to teach floral and faunal symbiosis to an alien plant species.

This error will be familiar to anyone who recognizes the argument from incredulity from religious debate. Sometimes grasping the sheer complexity of natural systems can lead us to believe that they need a strong guiding hand. (And to an extent, this is completely understandable, because evidence for how complexity arises on its own from simple systems can take time; almost 70 years after Alan Turing’s theory of pattern formation in nature, scientists and mathematicians are still working out all the mechanisms involved.)

Nevertheless, when longtermist thinking relies on the idea that the far-flung future needs our intervention today, we would always do well to check in with our understanding of how the natural world develops complexity in general.

Is our action as urgently required as it might flatter us to believe? Or is the outcome we want to achieve simply something that we’d love to see done, ourselves?

Testing secular longtermism by contrast

Religious versions of longtermism, which have been in operation for centuries if not millennia, offer an excellent point of comparison. Christian colonial expansion, along with more recent waves of eugenicist thought (up to and including religious fights to ban abortion out of a desire to protect the future of “the white race”), was also predicated on the belief that “someone” needed to guide and protect the fate of peoples to come.

Earlier this year, I published a translation of Tomás Carrasquilla’s short stories, which included three turn-of-the-20th-century Colombian versions of Old World Christian folklore. One of those, “The Solitary Soul”, tells the story of a kingdom brought to ruin because a single person sowed doubt in a young noble on the cusp of his marriage. That doubt destroys the family, its dominion, all of it; and then the culprit does penance by walking the Earth until granted a vision of the future his careless act destroyed: a vision of Christianity triumphing from the lineage that would have come from that marriage. Beautiful maidens literally taking up the cross to offset any risk of vanity through routine physical hardship. Artists painting portrait after portrait of the Virgin Mary. All the world converted and (in its commitment to Christian precept) supposedly improved.

It is the laziest form of critical thinking to condemn the superficial gloss of a bad idea, while still clinging to and seeking to benefit from its underlying oppressive structures.

Nevertheless, it’s common in rationalist circles for folks to critique religion, but not the unjust social parameters through which it’s been advanced to date.

Secular longtermists can make this error, too.

Better longtermism requires more than lip service to the world’s multiple moral peaks

Roman Krznaric, founding faculty at The School of Life and author of The Good Ancestor: How To Think Long-Term in a Short-Term World (2020), inadvertently illustrates this abiding secular susceptibility to bad ideas with an analogy he uses to explain longtermism: the “Marshmallow Brain” versus the “Acorn Brain”, for short-term and (very) long-term thinking.

In some interviews, like a piece last year for The Long Time Academy podcast, Krznaric neglects to mention that the “marshmallow test” he’s using as an imaginary baseline to describe human impatience has been thoroughly debunked. But even now that he has acknowledged that the marshmallow test is a far greater indicator of socioeconomic disparity and its impact on human behavior than of any real aptitude for planning, he still ends up over-determining what makes humans really, really special through his proposed alternative, our supposedly distinct “acorn brain”. As he wrote earlier this year:

But don’t other creatures think and plan ahead? Sure, animals such as chimpanzees make plans, like when they strip leaves off a branch to make a tool to poke in a termite hole. But they will never make a dozen of these tools and set them aside for next week.

Yet this is precisely what a human being will do. We are long-term planners extraordinaire. The Acorn Brain enables us to save for our pensions and write song lists for our own funerals.

Putting aside that we have no control over what songs will actually be played when we die, or if our pensions will be stolen by politicians despite all best efforts to plan ahead…

Did you catch the obvious oversight in a metaphor involving acorns?

Gosh, if only there were some species that routinely creates oak trees for future generations by laying in store a whole whack of acorns for hard times.

Yes, squirrels do this without actually thinking about future oak trees, but that too is a key point: many systems build robust futures without overt planning.

(See: Peter Watts’ Blindsight for a fun SF exploration of this idea that our conscious brains aren’t the be-all and end-all of evolutionary progress. Ironically even Bostrom, in a more recent paper where he imagines a future with more top-down government to rigorously police DIY lab tech and defend against death-by-AI disasters, advocated for building “stronger mechanisms for resolving international disputes … [l]ike a squirrel who uses the times of plenty to store up nuts for the winter”: which is another confusing comparison, but this time precisely because the squirrel acts unconsciously and we cannot afford to.)

Glibness aside though, Krznaric’s strained metaphor of marshmallows and acorns has a clear end, as does so much other feel-good, self-congratulatory work in the longtermist subdiscipline. Its purpose is to highlight tool production as what makes us exceptional, and as what should be prioritized in future planning for our species. Our “existential threat” is therefore anything that risks diminishing human intelligence as it manifests in industries related to production, expansion, and dominance over our natural environments.

Let’s contrast this with other approaches to what it means to be human. Longtermist rhetoric in the tech industry often pays lip service to Indigenous ways of being, and lifts just enough multi-generational thinking from tribal discourse to prop up pre-existing ideas about how to optimize those imagined billions of lives ahead.

Meanwhile, in 2018, the White Earth Band of Ojibwe formally recognized the legal rights of Manoomin, a wild rice species key to tribal heritage. In 2021, the band filed a tribal court case to enforce those rights, in what might have been a world first. But for those advocating for this species, Manoomin isn’t just another life form with fewer neurons and thus a lower welfare quotient; it’s family. If the band takes care of Manoomin, Manoomin takes care of the band. Without Manoomin in the past, no band. Without the band in the present, perhaps no Manoomin in the future. All of life, in many animist traditions around the world, is connected.


In South America, too, nature has been gaining stronger legal protections. In 2018, Colombia’s Supreme Court recognized the Amazon River ecosystem as having rights and being a beneficiary of state protection. In Ecuador, the Constitutional Court ruled in 2021 against mining concessions for the Los Cedros Reserve, in keeping with the rights of nature enshrined in the country’s 2008 constitution.

These ways of viewing humanity differ from the human “potential” advanced especially by technological utopians, who often view the theory of degrowth, and other efforts to adapt to current climate change by changing our sociopolitical priorities, as an unacceptable concession after centuries of growing in access to industrial-era material goods. Building systems where less is needed for humans to feel fully integrated into local democratic action and communities of care is, for many in tech spheres, a net loss for our species. They advocate for “sustainable” growth instead, through the development of more elaborate and low-cost technologies that will surely allow our competitive market economies to keep growing without taking as much of a direct environmental toll.

In other words: qualitative approaches not only to our future but also to what it means to be human don’t align well with longtermist projects that rely on metrics like average species neuron count, the number of planets we’ve seeded, and what new tools we’re dreaming up, to determine whether or not humanity is living up to its “potential”.

How then do we bridge the gap?

Is our action as urgently required as it might flatter us to believe? Or is the outcome we want to achieve simply something that we’d love to see done, ourselves?

Leaning into a longtermism that works

Again, the desire to avoid wasting resources and to help others in need is a good one.

The problems begin when we presume that what imaginary future people, along with people elsewhere in the world today, really need is someone else to make decisions for them.

Other forms of long-term thinking—the kinds sadly overwhelmed by tech billionaire dreaming as of late—have more built-in respect for self-determination. In practice, this means helping the future by doing everything we can to uplift people in the present. It means recognizing that every generational community needs the agency to decide for itself what makes life worthwhile and rewarding. And it means remembering the false teleology behind this notion of our “potential”—as if anyone’s waiting at the finish line of the human species to hand out medals for best in show.

We are fleeting witnesses to a probably endless and indifferent cosmos.

Our inheritors—human, resilient cockroach after nuclear fallout, supra-sentient fungal colony that will one day ruminate about longtermism on forums all its own—will be, too.

The best gift we can leave for the future is a good example. This, we do by being willing to make choices for societal transformation that require personal change as well, and by distrusting any theory of action that conveniently affirms pre-existing preferences. We also need a more robust habit of promoting agency above all else, even and especially when agency leads fellow humans to make different choices, and pursue different moral peaks.

No, not everyone will choose to prioritize the stars, cool new tech, and the development of a suspiciously eugenicist “next gen” species, as their metrics for superior human outcomes.

And yet, that maddening proclivity for dissent about the core of who we are is precisely how we know that what makes us truly human can—and will—go on.


GLOBAL HUMANIST SHOPTALK M L Clark is a Canadian writer by birth, now based in Medellín, Colombia, who publishes speculative fiction and humanist essays with a focus on imagining a more just world.