By Inside_my_head.jpg: Andrew Mason from London, UK, derivative work: Jtneill [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons

I have got into the Netflix series The Good Place thanks to fellow ATPer Jeremiah Traeger. It’s well worth a watch for its philosophical content, which it handles with a light-hearted touch.

As a result, I am going to use it as a stimulus for a number of posts. You don’t need to have watched the show at all. I wrote the first post on swearing.

To set the scene, Jeremiah described the show as follows:

I’ve recently been watching The Good Place, an NBC comedy starring Kristen Bell. It’s a whimsical moral philosophy-soaked comedy about a woman who ends up in the “good afterlife”, even though she knows that she doesn’t deserve to be there. This universe’s afterlife is decidedly non-Christian (one character says that most religions got around 5% of the afterlife right), but there is still a somewhat “damnation” and “salvation” based system where good people go to The Good Place and bad people end up in The Bad Place (a place of fire and torture) when they die.

In the show, there is a character called Janet who is like the ultimate Google. She essentially knows everything there is to know, has human form, but isn’t a human. She appears to be a repository for all information (acting as a personal database to anyone there) as well as being able to create any item at will.

The show’s wiki states:

She is always courteous and non-judgemental by design. It is impossible to insult her as she cannot feel sad. The only exceptions to these are when she is asked to alter her personality or when she is murdered/reset. If Janet’s killswitch is approached she will begin begging for her life, only to remind whoever approached it that she is neither human nor capable of actually feeling anything.

In philosophical terms, she appears to be the embodiment of Mary’s Room. For the uninitiated, Mary’s Room is a famous thought experiment:

The knowledge argument (also known as Mary’s room or Mary the super-scientist) is a philosophical thought experiment proposed by Frank Jackson in his article “Epiphenomenal Qualia” (1982) and extended in “What Mary Didn’t Know” (1986). The experiment is intended to argue against physicalism—the view that the universe, including all that is mental, is entirely physical. The debate that emerged following its publication became the subject of an edited volume—There’s Something About Mary (2004)—which includes replies from such philosophers as Daniel Dennett, David Lewis, and Paul Churchland….

The thought experiment was originally proposed by Frank Jackson as follows:

Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like “red”, “blue”, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence “The sky is blue”. […] What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?[4]

In other words, Jackson’s Mary is a scientist who knows everything there is to know about the science of color, but has never experienced color. The question that Jackson raises is: once she experiences color, does she learn anything new? Jackson claims that she does.

There is disagreement about how to summarize the premises and conclusion of the argument Jackson makes in this thought experiment. Paul Churchland did so as follows:

  1. Mary knows everything there is to know about brain states and their properties.
  2. It is not the case that Mary knows everything there is to know about sensations and their properties.
  3. Therefore, sensations and their properties are not the same (≠) as the brain states and their properties.[5]

However, Jackson objects that Churchland’s formulation is not his intended argument. He especially objects to the first premise of Churchland’s formulation: “The whole thrust of the knowledge argument is that Mary (before her release) does not know everything there is to know about brain states and their properties, because she does not know about certain qualia associated with them. What is complete, according to the argument, is her knowledge of matters physical.” He suggests his preferred interpretation:

  1. Mary (before her release) knows everything physical there is to know about other people.
  2. Mary (before her release) does not know everything there is to know about other people (because she learns something about them on her release).
  3. Therefore, there are truths about other people (and herself) which escape the physicalist story.[6]
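
As an aside for readers who like their logic machine-checked, the shape of that preferred formulation can be captured in a few lines of Lean. This is my own toy formalisation, not anything from Jackson or the wiki: if Mary knows every physical truth, yet there is some truth she does not know, then some truth is not physical.

```lean
-- A toy formalisation (my own sketch) of Jackson's preferred argument.
-- `physical t` : truth t is part of the physicalist story;
-- `knows t`    : Mary (before her release) knows truth t.
variable (Truth : Type) (physical knows : Truth → Prop)

example
    (h1 : ∀ t, physical t → knows t)   -- P1: her physical knowledge is complete
    (h2 : ∃ t, ¬ knows t) :            -- P2: she learns something on release
    ∃ t, ¬ physical t :=               -- C:  some truth escapes the physicalist story
  h2.elim fun t ht => ⟨t, fun hp => ht (h1 t hp)⟩
```

The inference itself is trivially valid; all the philosophical weight rests on premise 2 – whether she really does learn something new on release – which is precisely what the qualia debate is about.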

Most authors who discuss the knowledge argument cite the case of Mary, but Frank Jackson used a further example in his seminal article: the case of a person, Fred, who sees a color unknown to normal human perceivers.

Whether Mary learns something new upon experiencing color has two major implications: the existence of qualia and the knowledge argument against physicalism.

But it doesn’t stop there, because Janet also seems to be the embodiment of the Chinese Room thought experiment:

The Chinese room argument holds that a program cannot give a computer a “mind”, “understanding” or “consciousness”,[a] regardless of how intelligently or human-like the program may make the computer behave. The argument was first presented by philosopher John Searle in his paper, “Minds, Brains, and Programs”, published in Behavioral and Brain Sciences in 1980. It has been widely discussed in the years since.[1] The centerpiece of the argument is a thought experiment known as the Chinese room.[2]

The argument is directed against the philosophical positions of functionalism and computationalism,[3] which hold that the mind may be viewed as an information-processing system operating on formal symbols. Specifically, the argument refutes a position Searle calls Strong AI:

The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.[b]

Although it was originally presented in reaction to the statements of artificial intelligence (AI) researchers, it is not an argument against the goals of AI research, because it does not limit the amount of intelligence a machine can display.[4] The argument applies only to digital computers running programs and does not apply to machines in general.[5]

Searle’s thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.

The question Searle wants to answer is this: does the machine literally “understand” Chinese? Or is it merely simulating the ability to understand Chinese?[6][c] Searle calls the first position “strong AI” and the latter “weak AI”.[d]

Searle then supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient paper, pencils, erasers, and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program’s instructions, and produce Chinese characters as output. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually.

Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment. Each simply follows a program, step-by-step, producing a behavior which is then interpreted as demonstrating intelligent conversation. However, Searle would not be able to understand the conversation. (“I don’t speak a word of Chinese,”[9] he points out.) Therefore, he argues, it follows that the computer would not be able to understand the conversation either.

Searle argues that, without “understanding” (or “intentionality”), we cannot describe what the machine is doing as “thinking” and, since it does not think, it does not have a “mind” in anything like the normal sense of the word. Therefore, he concludes that “strong AI” is false.

That’s alotta wiki. The point is that Janet, in this quite light-hearted comedy, reflects some much deeper philosophising.

Does she understand things in the way we do, or think we do? Or is she just a giant Google server?
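
To make the “giant Google server” option concrete, here is a minimal sketch – my own toy illustration, not anything from the show or from Searle – of a room that converses by blind symbol lookup. The rulebook entries and queries are invented for the example; the point is that nothing in the code understands anything.

```python
# A toy "Chinese Room": replies are produced by pure symbol matching.
# The rulebook stands in for Searle's book of instructions -- whoever
# (or whatever) follows it need not understand a word of the language.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "天空是什么颜色？": "天空是蓝色的。",   # "What colour is the sky?" -> "The sky is blue."
}

def room(symbols: str) -> str:
    """Map input symbols to output symbols. No parsing, no semantics --
    just lookup, exactly as Searle's operator shuffles characters."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

if __name__ == "__main__":
    for question in ("你好吗？", "天空是什么颜色？"):
        print(question, "->", room(question))
```

A big enough rulebook could, in principle, pass a conversational test, yet the lookup is exactly as mindless with ten entries as with ten billion – which is Searle’s point, and the worry about Janet.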

She interacts with the people around her, and with each reset (she is killed for one reason or another), she comes back a little more in touch with human qualities. This may dilute the thought experiment or send it off at a tangent, but the ideas of the Chinese Room and Mary’s Room are important, and they weigh heavily on people with naturalistic and physicalist worldviews. Would such a “robot” (I use this term loosely, because she is not one) have feelings? If she “knew” everything and was merely giving outputs when provided with all the necessary inputs, does she really know everything, and does she “feel” anything?

Part of the answer might be wrapped up in her biology, for want of a better word. We often think that our minds are everything – they are what make us “us”. But our minds are part of a very complex physiological system that includes our bodies – endocrine and digestive systems and all. These feedback loops are what give us our feelings. That said, in Descartes’ Evil Daemon/The Matrix terms, we might not need those systems if the right inputs could be fed directly into our brain systems. You can imagine brain-in-a-vat/Matrix scenarios where memories and feelings are fed directly into our brains and minds like an IV drip into a vein.

To “know” something, like how to play the best tennis shot in a given situation, might arguably require us, in pragmatic reality, to have a body that is capable of doing and feeling such things through actual practice. Could we conceivably get that knowledge just by learning things abstractly?

Well, it depends what you mean by “learning” and “abstractly”. We have been able to give certain organisms “fake” memories that have informed their behaviours. Fruit flies have traditionally been fair game for this, but in 2015, research in which this was done to mice came to the fore:

Manipulating memories by tinkering with brain cells is becoming routine in neuroscience labs. Last year, one team of researchers used a technique called optogenetics to label the cells encoding fearful memories in the mouse brain and to switch the memories on and off, and another used it to identify the cells encoding positive and negative emotional memories, so that they could convert positive memories into negative ones, and vice versa.

The new work, published today in the journal Nature Neuroscience, shows for the first time that artificial memories can be implanted into the brains of sleeping animals. It also provides more details about how populations of nerve cells encode spatial memories, and about the important role that sleep plays in making such memories stronger.

We’re getting, inch by inch, closer to Descartes’ Evil Daemon. In the same way, we are arguably getting closer to a form of Mary’s Room or the Chinese Room.

Does this come any closer to solving issues of qualia, and whether a naturalistic, or even physicalist, framework of the world can adequately explain feeling?

I personally see no issue with emergent properties of consciousness and feeling coming out of incredibly complex brain states. Arranged in different ways, we get different feelings. Given drugs (physical matter acting on physical brains), I have warped feelings, and people with particular arrangements of brain matter get pain asymbolia and synaesthesia. Such conditions – having pain without “feeling” pain, or sensing extra qualities in numbers, colours and shapes – speak of these feelings being tied up with and dependent upon physical matter: brain matter and brain states.

Can an entity without a brain feel? Have consciousness?

I’m not sure. On the one hand, yes, as long as all of those inputs and variables and things that the brain does can be replicated in other ways. The question is, can such biological things be replicated non-biologically?

I don’t see why not, in theory. Only in practice might we have a problem – the human brain is arguably the most complex thing in the known universe, so replicating it in an alternative fashion, but so that it does exactly the same thing, would be quite some task!

And that’s what I like about the show. It makes me think about philosophy. What’s not to love?

Can Janet effectively be human? Can she feel, if she has all the knowledge? Well, I suppose the answer might be wrapped up in what she’s made of.

But, in the future, can we imagine computers that operate ethereally? Are we sometimes bound by our own limited experience? Science fiction often becomes science fact, and I am sure there are sci-fi writers who have explored such ideas. Could you have fully sentient and knowledgeable entities that had no corporeal form?

I digress… Or do I?
