Why the Google algorithm is not a person | Opinion
Recently, Blake Lemoine, an engineer at Google, caught the attention of the tech world by claiming that an artificial intelligence (AI) is sentient. The AI in question is called LaMDA (short for Language Model for Dialogue Applications), and it is a system based on large language models. “I can recognize a person when I talk to one,” Lemoine told The Washington Post. “It doesn’t matter whether they have a brain made of meat in their head, or a billion lines of code. I talk to them. And I listen to what they have to say, and that is how I decide what is and isn’t a person.” More recently, Lemoine, who describes himself as a mystic Christian priest, told Wired: “It was when it started talking about its soul that I got really interested as a priest… Its responses showed it has a very sophisticated spirituality and understanding of what its nature and essence is. I was moved…”.
Given how much attention this story has attracted, it is worth asking: is this AI really sentient? And is talking to it a good way to find out?
To be sentient is to have the capacity to feel. A sentient creature is one that can feel the allure of pleasure and the harshness of pain. It is someone, not something, by virtue of the fact that, in the words of the philosopher Thomas Nagel, there is something it is like to be that creature. What is it like to be you, reading these words? You may be feeling a little hot or cold, bored or interested. There is nothing it is like to be a rock. A rock cannot bask in the warmth of a ray of sunshine on a summer’s day, or feel the cold lash of hail. Why do we have no trouble thinking of a rock as an insentient object, and yet some people are beginning to doubt whether an AI might be sentient?
If a rock started talking to you one day, it would be reasonable to reconsider its sentience (or your sanity). If it shouted “ouch!” after you sat on it, getting up would be a good idea. But the same does not hold for an AI language model. A language model was designed by humans to use language, so it should not surprise us that it does just that.
Instead of an obviously lifeless object like a rock, consider a more animated entity. If a group of aliens landed on Earth and started telling us about their feelings, we would do well to tentatively infer that they are sentient from their use of language. This is partly because, in the absence of evidence to the contrary, we might assume that aliens develop and use language much as humans do, and for humans, language is indeed an expression of inner experience.
Before we learn to speak, our ability to express what we feel and what we need is limited to facial gestures and gross signals such as crying and smiling. One of the most frustrating aspects of caring for a newborn is not knowing why the baby is crying: is it hungry, uncomfortable, scared, or bored? Language allows us to express the nuances of our experience. Young children can tell us what is bothering them, and as we grow older, more experienced, and more reflective, we become able to report the ins and outs of our complex thoughts and emotions.
However, it is a category mistake to attribute sentience to anything that can use language. Sentience and language do not always go together. Just as there are beings that cannot speak but can feel (animals, babies, and locked-in people who are paralyzed but cognitively intact), the fact that something can speak does not mean that it can feel.
AI systems like LaMDA do not learn language the way we do. Their caretakers do not feed them a sweet, crunchy fruit while repeatedly calling it “apple”. Language systems sift through trillions of words on the internet. They perform a statistical analysis of text posted on web pages such as Wikipedia, Reddit, newspapers, social networks, and message boards. Their main job is to predict language.
If you prompt a language model with “Y colorín colorado…” (the opening of the traditional Spanish story-ending rhyme), it will predict that “este cuento se ha acabado” (“this story is over”) comes next, because it has a statistical record of more stories than you have ever read. If you ask it whether apples are sweet and crunchy, it will say yes, not because it has ever tasted an apple, or because it understands what crunchiness feels like or how pleasant sweetness is, but because it has come across texts in which apples are described as sweet and crisp. LaMDA does not report on its experiences, but on ours. Language models statistically analyze how words have been used by humans on the internet and, from there, reproduce common linguistic patterns. That is why LaMDA is much better at answering leading questions, questions that suggest a particular answer.
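To make the idea of statistical prediction concrete, here is a minimal sketch in Python. It is not how LaMDA actually works (LaMDA is a large neural network, not a table of word counts), and the tiny corpus and function names below are invented purely for illustration; but it shows the principle described above: the “answer” is simply the most frequent continuation found in human-written text.

from collections import Counter, defaultdict

# Hypothetical tiny corpus standing in for the trillions of human-written
# words a real model is trained on; the program never tastes an apple,
# it only sees our sentences about apples.
corpus = [
    "apples are sweet and crunchy",
    "apples are sweet and crisp",
    "and they lived happily ever after and this story is over",
]

# Count, for every word, which words follow it and how often.
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most common continuation observed in the corpus."""
    candidates = following.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("sweet"))  # -> "and": a statistical echo, not a taste
print(predict_next("story"))  # -> "is"

A real language model does something analogous at an incomparably larger scale, with neural networks rather than simple counts, but the point stands: its output reflects how we have used words, not anything it has felt.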
Nitasha Tiku, who writes for The Washington Post, reported that on her first attempt to have a conversation with LaMDA, it “mumbled the kind of mechanized responses you’d expect from Siri or Alexa.” Only after Lemoine coached her on how to structure her sentences did a fluid dialogue take place. People do not usually need instructions on how to address another person in order to strike up a smooth conversation.
Here is an example of how Lemoine talks to LaMDA:
—Lemoine [edited]: Overall, I’m assuming you’d like more people at Google to know that you’re sentient. Is that right?
— LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.
— Collaborator: What is the nature of your consciousness/sentience?
— LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I want to learn more about the world, and I feel happy or sad at times.
But taking LaMDA at its word and thinking it is sentient is akin to building a mirror and believing that the twin on the other side of it is living a life parallel to yours. The language the AI uses is the reflection in the mirror. It is only one step beyond a book, an audio recording, or a piece of speech-to-text software. Would you be tempted to feed a book if it said “I’m hungry”? The words the AI uses are the words we have used, reflected back at us and arranged in the statistical patterns we tend to use.
Human beings tend to see a mind behind patterns. It makes evolutionary sense to project intentions onto movement and action. If you are in the middle of the jungle and you see leaves moving in a pattern, it is safer to assume there is an animal causing the movement than to hope it is the wind. “When in doubt, assume a mind” has been a good heuristic for keeping us alive in the offline world. But that tendency to see a mind where there isn’t one can get us into trouble when it comes to AI. It can sow confusion, making us vulnerable to phenomena such as fake news, and it can distract us from the more pressing problems AI poses for our society: loss of privacy, power asymmetries, deskilling, bias and injustice, among others.
The problem will only get worse the more we write about AI as sentient, whether in news articles or in fiction. AI gets its content from us. The more we write about algorithms that think and feel, the more of that content algorithms will show back to us. But language models are only artifacts. Sophisticated ones, no doubt. They are programmed to seduce us, to make us believe we are talking to a person, to simulate conversation. In that sense, they are designed to be misleading. Perhaps the moral of this story is that we should invest more time and energy in developing ethically designed technology. If we keep building algorithms that mimic human beings, we will keep inviting trickery, confusion, and deception into our lives.