A not-so-new breaking science story about neurons and Pong still has plenty to teach us about the state of science literacy, and what we need to do to make it better.
Everything old is new again, in the world of mainstream reporting on scientific progress. That’s why you can be forgiven for déjà vu if you read this week about cells trained to interact in an environment mimicking the video game Pong. Wait a second, you might have told yourself: Didn’t we do this already? And you’d have been right: New Scientist reported on Cortical Labs research by Brett J. Kagan et al. last December. So why is it making the rounds again, and as if this were a freshly groundbreaking discovery?
Well, on the surface, because on October 12 Neuron published a research paper, “In vitro neurons learn and exhibit sentience when embodied in a simulated game-world”, that summarizes the team’s work to date. But more importantly, because our news network isn’t very good at treating science releases differently from other press reports. A PR statement goes out, an Associated Press piece goes live, a science article gets posted to a public-access website, and it’s all the same in news channels. The internet lights up a little like a series of neuron clusters on their own game board, with their own feedback loops to accelerate “learning”. A hundred news outlets rush to get their version live in time to earn those clicks.
But does this kind of journalism always serve us well when sharing science updates? Or is damage done to scientific literacy when we re-report discoveries as new?
The broader issue with media literacy
Up-to-the-minute reporting has its pros and cons for every topic. We need speedy intel for some events, but not for all of them, and not simply to fill air time. To approach journalism otherwise, from the view that everything that’s new should be delivered with the same intensity (and that everything intense should be delivered as if it’s new), is a well-known part of the “CNN Effect”. Ironically, though, this media strategy eventually backfired even for CNN, which lost viewers to pundit-forward venues that were spending less on investigative journalism but offering a more entertaining range of authority claims.
But even (or especially) for urgent items, without offering the right context, we can still do immense harm with breaking coverage. There’s a reason people wrongly believe there’s more violent crime than statistics support: according to George Gerbner’s “Mean World Syndrome” theory, they’re seeing more violent crime, in aggregate, from a wide range of local reports. For a species that only very recently started receiving real-time reports from everywhere all at once, this is not something we’re biologically adapted to parse with ease. A threat is a threat is a threat.
Journalists are also acting under threat, though, because most of the best homes for strong, evidence-based reporting are being underfunded if not outright gutted. Inaccurate or incomplete news items often emerge because reporters are enduring resource crunches that foster an over-reliance on readily available statements from parties with vested interests in a given “side” (e.g., police reports, party statements, and government office missives). Time and money simply aren’t on their side.
All of this requires redress. We sorely need to shift from a baseline secular news sphere where the thinnest veneer of ideological neutrality suffices, to one with an express interest in living up to the standards set by our knowledge that we have only one natural life in which to “get it right”. Standards such as:
- being open about our current knowledge gaps,
- enthusiastically correcting for error (our own as much as anyone else’s),
- accounting for the impact of false urgency while not shying from making direct arguments for an issue’s importance, and
- being proactive about follow-ups especially when they yield facts that contradict or transform initial findings.
Or, simply put: writing with humility and curiosity along with passion for the world.
But there’s an extra challenge when breaking news about scientific discoveries. And that has to do with the fact that real science news is always long term (and long form). At least, when done in adherence to the method on which it stands. When news reports fail to reflect scientific process, they advance scientific illiteracy instead.
Neurons and Pong
So what exactly happened last year in Cortical Labs, and what does this week’s latest published research by their team refine about our knowledge from December?
Neural network studies encompass a range of disciplines, generally converging around neuroscience research for medicinal and physiological purposes, and computer science with a focus on cognitive science and AI. Since the 1950s, trainable models like the Perceptron have been used to emulate biological processes, with the hope of improving artificial computing in turn. But only in the last 40 years have we made sufficient strides with organoids, cell clusters that self-organize and differentiate into various cell types that match those found in natural bodies. Only recently have we been able to significantly “reprogram” them, too.
At the heart of this latest research is the free-energy principle. This is a highly embodied view of consciousness, and it holds that since all organisms have finite physical resources, it is to their benefit to minimize the energy expenditures required to move through their environments. But how? This is where perception comes in: predictive modelling based on past experience, to avoid high-stress situations, and to lean into more restful states. Organisms able to act on accurate predictions can avoid being “surprised” into wasting energy on new sensory engagements. They can move more efficiently through their worlds, and stand a better chance of surviving long enough to pass on the perceptual tool kit that helped them achieve this end.
The team at Cortical Labs needed this principle for their experiments. It was not enough to grow neural networks from rodent and human stem cells on silicon chips that could send and receive electronic signals. When left to their own devices, the cell networks (called “DishBrain”) had little incentive to engage with the signals that researchers were sending them, and which were mapped both to grids on the silicon chips, and to a connected computer display. These signals created coherent rules for a game in which a single “paddle” could be used to keep a “ball” in motion.
But what happened when researchers sent a chaos of noise into these cell cultures whenever they didn’t perform the way they needed to, for the paddle to block the ball on researchers’ screens? And when researchers sent easier-to-predict patterns whenever the cell clusters reacted correctly? These closed-loop feedback systems exposed them to causality: a concept that requires recognition of an external state that can affect the internal (perception), and of how internal action can affect the external (action). The neurons’ actions after exposure to causal relationships suggest the development of predictive modelling: “Do X when Y happens, else NOISE.”
What Kagan’s team at Cortical Labs reported last December, as a body of findings then formally published this October, is that these neuronal cell clusters learned much more quickly how to “play” the game when exposed to chaotic signals after incorrect reactions to in-game stimuli. These neural networks, being inclined by Karl Friston’s free-energy principle toward whatever activity reduces wasted energy on action, soon demonstrated speedier perceptions of the correct response to the movement of the “ball” (signal) through their environment. By successfully extending game play, they put off the arrival of the far more taxing chaotic signal patterns.
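The shape of that closed loop can be sketched in a few lines of Python. To be clear, this is a toy stand-in, not Cortical Labs’ actual protocol: the “culture” here is just a table of action preferences, the patterned and chaotic stimuli are short lists of numbers, and the update rule is an invented placeholder for whatever the real networks do. But it shows the key design choice: the only training signal is the predictability of the feedback that follows each action.

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

GRID = 8  # positions the "ball" can occupy when it reaches the paddle's row

def feedback(hit):
    """Closed-loop feedback: a predictable pattern on success, chaos on failure."""
    if hit:
        return [1.0, 1.0, 1.0, 1.0]             # easy-to-predict pattern
    return [random.random() for _ in range(4)]  # unpredictable noise burst

def surprise(signal):
    """How far the signal deviates from the expected (all-1.0) pattern."""
    return sum((s - 1.0) ** 2 for s in signal)

def play(episodes=2000):
    # The "culture" is stood in for by a preference table: for each ball
    # position, a score for each candidate paddle position.
    prefs = [[0.0] * GRID for _ in range(GRID)]
    hits = []
    for _ in range(episodes):
        ball = random.randrange(GRID)
        row = prefs[ball]
        best = max(row)
        # Act on the current best prediction; break ties randomly (exploration).
        paddle = random.choice([i for i, v in enumerate(row) if v == best])
        hit = (paddle == ball)
        # No explicit "reward" for hitting the ball: reinforce whichever
        # action produced low-surprise feedback, penalize the rest.
        row[paddle] += 1.0 if surprise(feedback(hit)) < 0.5 else -1.0
        hits.append(hit)
    return sum(hits[:200]) / 200, sum(hits[-200:]) / 200

early, late = play()
print(f"hit rate over first 200 rallies: {early:.2f}; over last 200: {late:.2f}")
```

Run as-is, the late-game hit rate climbs well above the early one: the agent never sees “you hit the ball,” only that some actions are followed by signals it can predict and others by noise it can’t.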
And no, that’s not as exciting as saying “We taught brain cells to play Pong!”
But it does give a far better understanding of the importance and familiarity of this research. Which is what we sorely need more frontline science reporting to do.
The promise of machine learning, the hype of AI
Kagan told Nature what he feels is the value of his team’s research:
“In current textbooks, neurons are thought of predominantly in terms of their implications for human or animal biology. They’re not thought about as an information processor, but a neuron is this amazing system that can process information in real time with very low power consumption.”
And if you’re an everyday reader, that might seem pretty nifty. Revolutionary, even!
But if you have any familiarity with related disciplines, you’ll know that this is a stretch that oversells the accomplishment. As noted above, discussion about neural networks, biological and artificial, goes back to at least the mid-20th century.
In the 1980s, there was also deep debate about the extent to which the mind was a computer. Philip N. Johnson-Laird’s 1983 Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness is a strong precursor to Friston’s free-energy principle, in describing mental perception as a series of resource-limited models that can be digitally represented. John R. Searle, writing on consciousness and intentionality over the decade, developed the “Connection Principle” to argue that, much like any other computing system, our mental representations of rules do not cause patterns of behavior. Rather, “there is a neurophysiological cause of a pattern … and the pattern plays a functional role in the life of the organism.”
And was this happening in some dusty corner of the discipline? No. Roger Penrose’s The Emperor’s New Mind (1989) was a pop-sci bestseller that brought all of this wrestling with the mind as a computer to the fore of public discourse.
This history matters, because if it were better understood, repeat clickbait wouldn’t have the same cachet. Even better, there’d be more room to discuss responses from the rest of the science community to any sensational new scientific advance.
Steve M. Potter, for instance, is a neuroengineer at Georgia Tech who demonstrated goal-oriented behaviors in rodent-derived neurons (a clear antecedent to this work). He told NPR that what constituted cellular learning in this field remained “quite rudimentary”, but his observations were buried at the end of the piece. Nature also held its counterpoint until the end. There, neuroscientist Takuya Isomura, of the RIKEN Centre for Brain Science, points out that the paper’s findings don’t yet offer robust enough evidence of these neuron cells performing goal-oriented behavior.
And sure enough, the paper itself agrees with this limitation. In their conclusions, the authors note that, although their research revealed a strong difference between how the control-group DishBrains responded to game signals versus the ones given two forms of “feedback” depending on response to initial stimuli,
simply minimizing entropy (i.e., average surprise) may offer an overly simplified account of [the cell network’s] adaptive behavior: a key aspect of active inference is the selection of actions that minimize the surprise or free energy expected on following that action. While these results are interesting and supportive, they are not conclusive, and future work is required, including exploring BNN behavior with a generative model.
And that work may well come! But this is a key difference between news reporting for everyday readers, to build excitement for science in process, and how discovery is discussed by experts. Within these fields, a paper’s publication is certainly an important step—but one of many, on a long road of replication and falsification tests, amid the ongoing exclusion of other variables through experimental refinement.
Meanwhile, for the rest of us, other considerations are important to keep in mind.
Funding models and the scientific method
Cortical Labs is a privately held, venture-capital-backed company in the field of neural-platform-based bio-computing. It’s staffed by a small, passionate team, and is still in the early phases of its market growth. As such, it’s still relying on seed money to develop future products that will offer significant returns on initial investment.
Now, even if not ideal, there’s nothing automatically “wrong” about science being advanced by private investors like Blackbird. However, this context should still change the operating premises of journalistic interaction with any related discoveries. Even if peopled by a great, driven staff eager to change the world for the better, private companies still have different pressure points, and will benefit differently from the widespread dissemination of even their most tentative published results.
This isn’t the first time, though, that we’ve seen complex relationships play out between media promotion of private company advances, and scientific discovery for broader public benefit. Whether it’s another tech-future promo tour for Saudi Arabia’s latest megacity project, or a private company using any PR opportunity to fundraise for de-extinction projects involving mammoths and Tasmanian tigers, science and tech are routinely woven into private enterprises that rely on overselling early achievements to keep future investment potential high.
And yes, these companies have to promise exciting deliverables, if they’re going to be around long enough to maybe paradigm-shift our world for real.
But do news outlets owe them free advertising? Especially when said advertising involves downplaying scientific process and upselling preliminary results?
Or how about when the goofy treatment of a story like this one, which exaggerates the extent of neural “learning” and plays up the video game angle for sensation, doesn’t prepare average citizens for the deeper issues that such research raises?
Because the fact is that these cells did not “play” anything. They didn’t toke up, kick back, and josh with their neuron-friends online. They have no idea what a paddle or a ball looks like on researchers’ computer screens.
These cell clusters received one of two types of “feedback”, depending on how they responded to signals moving through their environment. And we still don’t know the extent to which they were acting on predictive models to avoid the worse one.
But wouldn’t you know it? The Cortical Labs team actually shared a paper about just that this year too. “Neurons Embodied in a Virtual World: Evidence for Organoid Ethics?” was published in March 2022 by AJOB Neuroscience. Alas, more nuanced scientific discourse rarely makes for good news. So instead we got a rehash of December’s more sensational headline for a news cycle. And maybe, in a few months, with another academic publication or press release, we’ll see similar again.
How quickly cell clusters can remember—and forget.
The news as a neural network
Science journalism for everyday readers needs better closed-loop feedback. Why? Because individuals and their outlets will always misfire from time to time—I certainly do—but the aim isn’t singular perfection: it’s collective learning. It’s systemic growth.
Only, what would be the right “noise” to incentivize better choices? And where would it come from? Within what ethical parameters?
Or perhaps we have enough noise already, and simply lack a better pattern to grow toward instead. Maybe what we need isn’t just better examples of communal self-correction, curiosity, and scientific literacy in mainstream media—but also, more evidence of our socioeconomic and political systems supporting the same.
To that end, if anyone has an experimental model in mind, now might be a good time to try it out. As more federally funded research papers are released directly to the public under changing US regulations, we’re only going to see more journalists trawling science pages for the latest story to spin mainly for clicks—which is great, on one level. More research news! Spread it far and wide!
But it would be even better if we didn’t forget to mind the background noise—the ever-important “who” behind the “what”—and quit misrepresenting both the science and its processes so often along the way.