Overview:

Yet again, new software has folks in highly specialized fields panicking about losing their livelihoods to machines. But the real culprit, corporate monopolies, needs our criticism more.

Reading Time: 7 minutes

It’s been 16 years since a student at Dalhousie University called for Canadians to stop using Turnitin, an online plagiarism detector, because student data could be subject to the USA Patriot Act. This was one of a few major criticisms of the for-profit software when it first came to dominate school settings early in the 21st century. Another was that the mandatory use of this service compelled students, under a presumption of guilt that violated many schools’ codes of ethics, to give up personal information to a third party, and to otherwise compromise personal copyright to improve an algorithm for a business that preyed on academic anxieties about cheating.

But for all these ethical qualms, educators generally accepted the change in technological status quo—in part, because many new administrative policies gave teachers little choice; in part, because the software improved workflow, a key consideration in fields like post-secondary education where up to two thirds of teaching staff were economically precarious.

Whether Turnitin even reduced plagiarism is debatable, because all it does is look for matches between student writing and writing elsewhere on the internet. Many forms of plagiarism cannot easily be detected this way, and various font and formatting tricks can slip copied text past the matcher entirely. Meanwhile, teachers may rely so heavily on the software’s verdict that they miss otherwise obvious cases themselves. (We also have no idea if the code itself contains plagiarism, because it’s not open-source.)
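To make that formatting loophole concrete, here’s a minimal sketch of the general overlap-matching technique, in Python. (This is purely illustrative: Turnitin’s engine is proprietary, so the naive word n-gram matcher below is an assumption about the broad approach, not the company’s actual method.) Swapping a Latin letter for a visually identical Cyrillic homoglyph leaves the text unchanged to a human reader, yet can defeat exact matching entirely:

```python
# Illustrative only: a naive n-gram overlap matcher, NOT Turnitin's code.

def ngrams(text: str, n: int = 5) -> set:
    """Return the set of word n-grams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(submission: str, source: str, n: int = 5) -> float:
    """Fraction of the submission's n-grams that also appear in the source."""
    sub = ngrams(submission, n)
    return len(sub & ngrams(source, n)) / len(sub) if sub else 0.0

source = "the quick brown fox jumps over the lazy dog near the river bank"
copied = "The quick brown fox jumps over the lazy dog near the river bank"
# Replace every Latin 'o' with the visually identical Cyrillic 'о' (U+043E):
evaded = copied.replace("o", "\u043e")

print(overlap(copied, source))  # 1.0 -- a verbatim copy is flagged
print(overlap(evaded, source))  # 0.0 -- same text to the eye, invisible to the matcher
```

Real detectors presumably normalize away some of this, but the gap between “looks identical to a reader” and “matches byte for byte” is exactly where such software remains easiest to fool.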

AI panic, selectively

I’ve been trying to make sense of my irritation with recent panic over machine-learning software that might “take our jobs”. A few weeks ago, Ammaar Reshi boasted on Twitter of having used OpenAI’s ChatGPT and Midjourney to write and illustrate a children’s book over the weekend. Around the same time, Tor Books landed in hot water when it turned out that the cover of the latest Christopher Paolini novel, though taken from a site that had ostensibly banned AI-generated art, was indeed based on an algorithmic creation.

In both cases, a great many writers and artists fiercely condemned the creators of these software-generated products, and had plenty of choice words for the impoverished quality of the algorithm-generated images themselves.

(Not all, mind you. Some writers and artists are enthusiasts of algorithmic production processes, and argued that the cover in particular did its job well.)

Likewise, some academics are now worried that ChatGPT, which can produce whole paragraphs of coherent argumentation in response to user queries, will usher in a new era of plagiarism that Turnitin won’t be able to catch, making educators’ lives harder. AI-fabricated references, in particular, will take far more skill to identify. Is this the end of the student essay? How ever will educators assess class learning now?

But the real issue reveals itself in the rhetoric of alarm over machine learning. In my other publishing field (SFF) the cry has been simple: AI will take over! Movies and books will be made entirely from algorithms! Writers and artists will lose their livelihoods!

In this claim, though, lies a critical misdirection of attention and concern. How, exactly, would AI take over in this scenario? Are we talking Skynet? The machines singlehandedly driving a major shift in market action?

No, of course not. The fear is that corporations will leverage cost-cutting software to their further benefit. That corporations will decide they no longer wish to partner with traditional writers and artists now that these advanced algorithms have come along. Job loss to cost-cutting is a legitimate concern, but one that far predates all these new technological applications, and one that requires a very different sort of redress.

But also—and more to the cause of my irritation—because the other half of this implicit argument runs: “Audiences are dull-witted reacto-bots following whatever’s shiniest, so of course they’ll flock to nothing but AI-generated books and cinema, leaving real creators in the dust!” Which speaks a lot more to how many fellow creators view their audiences, in an economy where presses and movie studios already play it safe by only buying work that feels similar to past sure bets, and where common wisdom holds that you need a “hook” to grab people in the opening line, or else those flighty dullards won’t read on.


It’s this insulting approach to actual readers and viewers (who absolutely form intimate bonds with human creators, and will continue to do so), and the way corporate monopolies get glossed over as the actual ongoing threat to human thriving, that has been grinding my gears against the backdrop of all this excessive future shock. I was especially grumpy to see it happening among fellow speculative writers: people, that is, who I had made the mistake of thinking should know better, by virtue of our dreaming up worlds that routinely imagine behavioral transformations in lockstep with new tech.

And yet, of course they didn’t: because speculative writers are no better than anyone else at coping with change when it imperils personal livelihood. Like most human beings, including the educators and administrators who by and large hand-waved ethical issues with Turnitin two decades ago because of its immediate benefit to them in a precarious teaching economy, creators are also very good at ignoring a problem until it negatively impacts themselves.

Grappling with corporations

Aimee Ogden’s “Intentionalities” depicts this mentality well. The opening lines of this splendid short story read:

Sorrel never intended to confer a child to the Braxos Corporation. But Sorrel had never intended a lot of things that managed to happen with or without her say-so.

What follows is an explanation of how debt pressures and a perceived lack of other escape mechanisms in a corporation-driven world leave Sorrel feeling like she has no other choice but to bear a child that will be handed over at the age of five, to do perilous labor in space with a promise of freedom and stability after the term of service. Only then—after she’s conferred her child, a person she’s met and known and loved, to this heinous indentured servitude—does she realize how unacceptable the whole situation always was. Only then does she do what she could have done from the outset: protest the system, and resist it.

But up until the point when the loss hit her personally, she could rationalize her actions in an unjust system. She could imagine that she was just doing the best she could to survive in a corrupt society, playing by its rules as best she could.

I’m no better—which is also why this latest future shock among my fellow creators irritated me: personal shame. It reminds me how long I, too, tried my best to make it in unjust systems, and reacted to their pressure points defensively, hoping that if I played by broken and unethical rule sets long and well enough, I could eventually gain the stability and power (in academia especially, and publishing) to do better myself. Ha.

Ogden had another story out at the end of the same year, one that was far more widely read and acclaimed. Why did it do better? Her take on Tom Godwin’s “The Cold Equations” (1954), a story called “The Cold Calculations”, glosses over key mitigating details in the history of one of SFF’s most contentious tales to affect a form of righteous resistance to old oppression. In doing so, it played perfectly into a cozy progressivism that delights in the easy win of condemning the past for not being as enlightened as the present.

Far more difficult, as “Intentionalities” beautifully outlined (and predictably, to far less social acclaim, because it is the more challenging read), is recognizing our own complicity in broken systems, and reckoning with the hard fact that many of us won’t take up the fight against them until they negatively impact us, personally. Until that point, we might even be at the forefront of all sorts of technologies that do harm to others—so long as they serve us well.

Complicities abounding

So it is that we’ve had a glut of machine-learning ethical dilemmas in the past two decades, long before the current panic. For one, there’s the use of this tech by state authorities to accumulate data on citizens in public spaces. Yes, we know it’s happening, in the US as in China, and yes, we know that this data also reinforces existing societal biases. But we still unwittingly play into the technology’s refinement through online interactions like CAPTCHA tests: necessary, we rationalize to ourselves, if we want to retain access to sites that educate, entertain, and connect us to one another.

Then there’s the development of self-driving vehicles that expand surveillance, with truck drivers especially bearing the brunt of tech that cannot fully replace them yet, but which companies are nevertheless using to further exploit the workforce. Why would we show up for this crisis, though, when we’ve done precious little to support such workers in the past?

And of course, there’s the spread of deepfakes shaping news discourse, infiltrating online comment forums, and worsening sexual harassment. (Plus, let’s not even get started on the disturbing state of children’s YouTube.)

Have we grappled with these issues individually? Of course. We’re living in an era where figuring out what is “real” and what isn’t won’t get any easier, and we have a lot of work to do to reckon with what this will mean for dominant systems of exchange.

But the outcry we’re seeing now toward software that might encroach on educator, writer, and artist livelihoods lacks holistic critique, and often leads panicked individuals to default to defending an already terrible status quo.

What will happen to the essay, now that it’s been “hacked” by AI? Well, maybe educators will need to go back to asking students to write one on the spot, by hand. Or maybe other ways of demonstrating knowledge, better informed by hands-on application or Indigenous and peer-group alternatives to top-down instruction, will replace teaching-to-the-test altogether.

“But but but—there isn’t time for all that! We don’t have the resources!”

Absolutely true. Just as it’s true that at the core of writer and artist panic over machine learning is a fear of losing our livelihoods in a culture that treats human value as contingent on economic output. What if we built a world instead where everyone was beyond subsistence-level striving, and could pursue creative output if they so chose?

Yes, we probably wouldn’t have as many celebrity artists and writers. The horror! But we also wouldn’t have outsize want and need. Is that really so difficult a choice for us to make? Are we truly so afraid that our output could be replaced with algorithms, that we’d rather fight specific algorithms than the toxic idea that “output” is what gives human life value?

Technological shift should empower us to tackle pre-existing corruption in our status quo. It’s not the fault of machine learning that most human output is ultimately replicable (or will be, one day), and we cannot un-shatter the teacup that current algorithms, in their uses and abuses, have already dropped all over contemporary society.

What we can do—and what some of the best SF, like Ogden’s “Intentionalities”, calls for us to do—is honor the fact that, whatever position we’ve taken before, whatever complicities we’ve allowed ourselves to believe were previously beyond our control to overcome, we are nevertheless now and forever in a fight for our lives with dehumanizing systems.

So if we want to protect our species’ future? We need to wrest control of it back from the true automatizing threat in our societies—and that’s corporate monopolies, not AI.

GLOBAL HUMANIST SHOPTALK

M L Clark is a Canadian writer by birth, now based in Medellín, Colombia, who publishes speculative fiction and humanist essays with a focus on imagining a more just world.
