If we want artificial “superintelligence,” it may need to feel pain

- Aristotle argued that there are three kinds of soul, and modern philosophy similarly speaks of three layers of consciousness: sentience (feeling), sapience (reflection), and selfhood.
- The philosopher Jonathan Birch argues, first, that we should consider sentience to be far more widespread than we typically assume and, second, that sentience might be essential to “higher” forms of intelligence.
- Big Think spoke with Birch about how artificial intelligence presents a curious and somewhat sinister exception to the pattern of all known intelligence.
Scientists love a good classification system. It’s important to give things labels, and it’s fun to step back and admire your beautiful taxonomic tables. Given that Aristotle is considered one of the first scientists, it’ll come as no surprise that he was just as fond of dividing the world into categories. He divided animals into “those with blood” and “those without.” Some he classed as living on land and others as living in the sea. But one of the most famous Aristotelian categories was his division of the soul. It’s a division that has defined the Western conception of all living things.
Aristotle argued there were three types of souls, each building on the last. First is the vegetative soul: the basic, automatic functions of growth and nutrition. Your pot plants have this soul. Second is the sensitive soul, which involves perception and awareness. Your dog has this. Finally, there is the rational soul, which involves intelligence, consciousness, and imagination. You have this.
Given that Aristotle was writing so long ago, it’s remarkable how little has changed in how we understand life. The big difference, however, is how far these categories stretch. To make sense of vegetative and sensitive souls, Big Think spoke to Jonathan Birch about his new book The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. Not only does Birch believe we give too little credit to non-human animals, but he also argues that the dawn of AI might mark a unique moment in our planet’s history: the birth of a new type of soul.
Three layers of consciousness
Today, few scientists would use Aristotle’s language, and many would balk at the use of “soul” in an empirical setting. Birch noted that the modern discourse around living things goes back to “a philosopher called Herbert Feigl, writing in the 1950s, who said there are these three layers of consciousness. There’s sentience, then sapience, and then selfhood.”
Sentience, Birch said, is “immediate raw experiences of the present moment — your senses, your bodily sensations, your emotions, here and now.” For example, when a mouse is repulsed by a noxious smell, that’s evidence of sentience.
Sapience is more sophisticated. “It’s a kind of overlay on the top,” Birch said. “It’s our ability to reflect on our experiences, for something not just to hurt but for us to have a thought of that hurt and say something like ‘that was the worst pain I’ve ever had in my life.’” When Buddhists claim “pain is inevitable and suffering is optional,” they’re talking about sapience. Sapience is the mind’s reflection on what sentience provides.
Finally, there is selfhood, which Birch described as “our awareness of ourselves as beings with a past and a future. And this is a really sophisticated capacity.”
Birch’s book is all about sentience, and he argues that we should adopt a far broader understanding of what he called “sentience candidates.” Birch believes that “the class of sentience candidates is very wide indeed. It’s not just all vertebrate animals but also octopuses, crabs, lobsters, shrimps, insects. They’re all sentience candidates because there’s empirical evidence of a serious kind that points to a realistic possibility that they are feeling things and it would then be irresponsible to ignore when we’re making decisions that affect these animals.”
The artificial leapfrog
Sentience interests philosophers for two key reasons. The first, as Birch’s book considers, is the ethical question: If certain species can feel pain, that has implications for how we should treat them. Second, sentience is interesting because of what Birch calls the “overlay.” It’s commonly thought that the higher consciousness we see in Homo sapiens is built upon other, nested forms of consciousness: Our rationality depends on sapience, which depends on sentience. The evolutionary story is one of development, and our brain is literally built in a way that tells that story.
Now we have artificial intelligence, and it’s “artificial” in such a way that it jumps all those evolutionary hurdles and demonstrates some kind of “intelligence.” Here, Birch suggests that “it’s entirely possible that in AI we will see this huge decoupling where we have very high levels of intelligence that might even surpass the human level without any underlying sentience at all.”
It’s here that Birch suggests something curious and quite possibly the stuff of dark science fiction.
“It might be that to get superhuman intelligence, you do need some level of sentience. We can’t rule that out either; it’s entirely possible. Some people argue that that kind of real intelligence requires sentience and that sentience requires embodiment. Now, there is a view in philosophy, called computational functionalism, that [argues] sentience, sapience, and selfhood could just be the computations they perform rather than the body they’re situated in. And if that view is correct, then it’s entirely possible that by recreating the computations the brain performs in AI systems, we also thereby recreate the sentience as well.”
Birch is saying two things here. First, it’s reasonable to suggest that “superintelligence” requires sentience. Second, we could potentially recreate sentience in AI with certain computations. Therefore, if we want AI to reach “superintelligence,” we would need it to be sentient. We would need AI to feel things. ChatGPT needs to know pain. Gemini needs to experience euphoria.
The fact that underlies Birch’s book and our conversation is that intelligence is not some deus ex machina dropped from the sky. It is not some curious alien artifact uncovered in a long-lost tomb. It’s nested within an unfathomably long evolutionary chain. It’s the latest word in a long sentence. But the question Birch raises is: Where does AI fit in the book of evolved intelligence?