Amid all our latest anxieties about AI taking over professional tasks, do we finally have time to reflect on what makes being human truly count?
It’s been a good few months for machine-learning software—and a confusing one for the humans trying to live their best lives around it. Last fall, digital art generators like Midjourney and DALL-E drew alarm from illustrators, photographers, and other digital artists: not just because the algorithms were trained on data not expressly sanctioned by human creators for such uses, but because the results were being monetized, and because human artists started losing jobs the moment employers saw a lower-cost alternative.
(Not that we’ve always been able to tell the difference between human and AI output: A recent Reddit scandal involved a human artist who was banned from an art-sharing forum and told to use a style that didn’t look so much like an AI rendering in the future.)
Similarly, OpenAI caught the world’s attention with its ChatGPT program, which produces fairly coherent responses to user queries. Early output sparked panic about the end of the university essay as an assessment metric, the program’s impact on other college-level courses, and the future of other writing and coding jobs: an issue that also came to the fore in 2020, when Microsoft let go of dozens of contractors in favor of AI programs guiding news story selection and curation. A GPT-3 program dedicated to answering legal questions also stepped into the public eye last fall, followed early this year by news of an “AI-powered ‘robot’” serving an actual human defendant in a traffic ticket court case.
Notably, there’s far less outcry over the clear benefits of machine-learning for other fields. AI has made it easier to solve open problems in mathematics, predict the structure of almost all known proteins (critical for the development of new drugs), manage overcrowded ERs through more efficient patient triage, and improve screenings for cancer, Alzheimer’s and Parkinson’s, and other major diseases. Still, medical students are concerned they might lose their jobs to the rise of more such tools, and some use cases, like the role of AI in mental health crises, are more complicated. The use of such algorithms to detect suicide risk from online behavior is one thing, but when Robert Morris recently boasted about replacing human operators with GPT-3 on a mental health chat platform, he received significant backlash over the ethics of experimenting on people in crisis.
So what’s going on? What’s really causing this erratic range of responses to new tech?
The value of being human
Beneath all these anxieties, though, lies another deeply human factor:
As much as we might long for post-scarcity cultures where everyday needs are fully met, the road to such an end involves giving up a way of thinking about human value that has become essential in post-industrial societies. The careerist way. The way of an idealized middle-class status, a lifestyle “achieved” by cultivating and being economically rewarded for a specific set of skills prized by society. Who am I? I am a banker, a lawyer, a teacher, a plumber, a baker, a doctor, a civil engineer, a programmer, a sanitation specialist, a factory worker, a daycare provider, a poet, a soldier.
When any of these jobs becomes replicable and/or replaceable by new technology, the sense of a personal attack on livelihood runs deep: at least, in a culture that binds one’s job to one’s value as a human being.
But do these two concepts need to be so inextricably linked?
In theory, we know utopia cannot be built on careerism. Although it is a fine thing to specialize, if only for the intrinsic satisfaction of pursuing a given skill, the idea of one’s economic security being tied to a set career path always introduces the need for market scarcity. It’s just that moving past this deeply entrenched way of thinking is difficult.
How has our species solved the challenge before? Often, by invoking omnipotent beings that stand ready to love us all equally, irrespective of our worldly labors, and by imagining an afterlife where non-economic metrics will determine one’s ultimate rest.
But there have been other paths, too. Some religions advocate for a complete detachment from the material world: living in it, without coveting objects and relationships in it. (And some go full circle in their obsession with ascetic living, such that maintenance of this lifestyle becomes its own highly coveted good.) In more collaborative cultures, communities gather and roles change to suit the needs of the moment. A barn raising doesn’t need everyone to be a carpenter by trade: just that everyone’s willing to take direction from the people with the most knowledge about how to get the job done.
A humanist approach, wherever it arises, necessarily treats human life as having intrinsic value simply because all life bears unique witness to the cosmos while it exists. The stars cannot feel, so all their added billions of years simply are: value-neutral, indifferent. Only we, in our fleeting time alive, create meaning all around us.
What a shame, then, how often we’ve chosen to create meaning around monetizing and objectifying one another. Around treating fellow human beings not entirely unlike the machines many now fear will take over whole industries. Are we finally ready for a change?
Finding human value in new meaning creation
AI is not going away. Machine-learning will continue to become a more effective and integrated tool in a wide range of human labors. And all our attempts to draw hard lines in the sand about what humans alone can do will continue to meet with fuzzy borders.
None of this is new. Europeans (among other tribes of humanity, over time) were once adamant about delineating between their lineages and those of people from “lesser races”, or lower classes within their own societies. That was the only way they knew how to define personal value: in contrast to an asserted, inferior other. Similarly, most human civilizations once drew a hard line between human beings and other animals: the former sentient, imbued with souls, and able to feel pain; the latter mere mechanical beasts upon whom all sorts of torturous actions could be enacted with a clean conscience.
Alternatives exist, but the path to those ends is fraught with major impediments. Granted, some of those are budging. On January 5, the US Federal Trade Commission proposed a long-overdue measure: a complete ban on non-compete clauses in employee contracts. Non-compete clauses, which restrict a person’s ability to take their skills to a competitor, or to create their own business with said knowledge and experience, are not universal. Where they exist, they reduce a person’s ability to bargain for better salary and working conditions, and otherwise to live with greater autonomy from a given employer. The FTC estimates that the ban would boost wages by some $300 billion overall, simply by allowing employees more agency in the market economy.
Moves toward greater worker empowerment are an important step to reclaiming our value as individuals, separate from the work we’re often frantic to believe only we can do. So too is the creation of a more robust collective safety net, in a culture that can continue to achieve high product yields with reduced human activity thanks to tech assists and refined industry processes. Whether that safety net comes through universal basic income, or some other scheme of public funding that allows humans to make choices for themselves and their communities from a place other than subsistence-level job panic, we can—and should—now be taking the time to hash out related questions.
Instead, by and large, the US is dealing with escalating laws against homeless people (and not, as it should be, against involuntary homelessness due to socioeconomic precarity). It’s seeing nonsense phrases like “quiet quitting” widely disseminated by credulous news media to shame employees for not doing more than they’re contracted for, on behalf of employers and corporate monopolies that hoard profits and still perpetuate mass layoffs in the end. Meanwhile, in the UK? Rumors of a general strike in 2023 have been mounting for good reason: the country has seen little in its revolving door of national leadership to inspire confidence that the state seeks workers’ best interests.
Machine-learning software currently makes for an easy scapegoat: blame the tech, not its users. Not the stark rich-poor divides growing everywhere, which favor those already of means in politics (dare we even call it “democracy”?) as in market participation. But when we blame new technologies, we’re suggesting that “if only” they didn’t exist, “if only” they could be banned, everything else would be fine. Not only is this a futile battle (the tech is here; the tech will always be here), it also severely misrepresents the cause of our current social crises, and leaves us ill-prepared to combat the problem at its source.
How we choose to define the worth of being human, as the core operating principle underpinning all public policies we advance, needs to change.
There’s lots of room to discuss the form of that change.
The shape of the policies that will best serve a more empowered set of human beings.
But these conversations must take place now, if we’re ever to reckon maturely with what it means to live in a “robot”-haunted world.