
Why ‘Is LaMDA Sentient?’ Is an Empty Question


And 3 barriers that separate us from the truth — whatever it is.


Photo by local_doctor on Shutterstock

The hot topic in AI this week comes from Google senior engineer Blake Lemoine, who claims that Google’s large language model LaMDA is sentient. In a viral article for the Washington Post, Nitasha Tiku wrote about how Lemoine concluded, after a few months of interacting with the bot, that it was a person. He then tried to convince others at Google of the same thing, but was told that “there was no evidence that LaMDA was sentient (and lots of evidence against it).” Lemoine was put on paid leave for violating the confidentiality policy, which could end in his termination.

Let’s see what LaMDA is and why his claim is empty.

LaMDA (Language Model for Dialogue Applications), announced at Google’s I/O conference in 2021, is the company’s latest conversational AI, capable of managing the “open-ended nature” of human dialogue. At 137B parameters, it’s a bit smaller than GPT-3. It was trained specifically on dialogue with the objective of minimizing perplexity, a measure of how confident a model is in predicting the next token. Because LaMDA is a transformer-based language model, no responsible AI researcher would take Lemoine’s claim of sentience seriously.
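
For intuition, perplexity is simply the exponential of the average negative log-probability the model assigns to each token that actually comes next. Here’s a minimal Python sketch, with made-up probabilities standing in for a real model’s outputs:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability
    assigned to the tokens that actually came next."""
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)

# Illustrative only: probabilities a dialogue model might assign to the
# five tokens of a short reply. A confident model -> low perplexity.
confident = [0.60, 0.55, 0.70, 0.50, 0.65]
unsure    = [0.10, 0.05, 0.20, 0.08, 0.12]

print(perplexity(confident))  # ~1.7
print(perplexity(unsure))     # ~10.1
```

A model that consistently assigns high probability to the right next token gets a low perplexity; that is all the training objective rewards.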

But Lemoine isn’t the only contributor to the AI sentience/consciousness debate that’s been getting increased attention lately (I will use “sentience” and “consciousness” interchangeably throughout the article, as it’s pretty clear that’s the intention behind Lemoine’s claims). Ilya Sutskever, OpenAI’s Chief Scientist, claimed in February that today’s large neural networks could be “slightly conscious,” a claim I gave my opinion on in this piece. He never disclosed what he meant by “slightly” or “conscious,” and provided no evidence or explanation.

The fact that high-profile people working at the tech companies driving AI research are starting to make bold claims about AI sentience/consciousness will have consequences. As I see these conversations happening more often, I can’t help but wonder where we’re going with this. As these debates reach the general public, many people will start to believe these claims, lacking the knowledge or expertise to even begin to healthily doubt them.

Many great AI researchers are trying to combat this potentially dangerous trend. For instance, Emily M. Bender, Timnit Gebru, and Margaret Mitchell wrote a great paper in which they dubbed large language models “stochastic parrots”: regurgitating internet text data in a seemingly reasonable order isn’t the same as understanding or intelligence, let alone sentience.

In this article, I won’t defend my opinion or enter into the debate of whether AI is sentient — or when it’s going to be, if ever. My purpose here is to describe what we should do to go from stating mere opinions that reveal absolute ignorance about sentience/consciousness in general, and in AI in particular, to finding the answers we seek — now or in the future — whatever those turn out to be.

I describe three barriers that prevent us from learning to ask the right questions on the topic of AI consciousness (although the arguments apply more generally) and from interpreting the answers that would give form to a reality that’s now out of reach. The first barrier is the one I consider critical to tackle now. The other two can wait, as we’ll only face them in the long term, if ever.

First barrier: Human gullibility

We humans are universally biased and overconfident when it comes to our beliefs. They depend on many factors (education, knowledge, culture, desires, fears, etc.) and do not necessarily point to absolute truth (when there is one), which makes them biased. On top of that, we tend to assume the methods we use to build those beliefs are far more infallible than they really are, which makes us overconfident. We end up with a set of biased — when not plainly false — beliefs that we blindly overtrust. Our reality is thus often made up of illusory certainties.

That’s what most likely happened to Lemoine. He has been silent these past days during the storm, but earlier today he explained in a tweet that he couldn’t back up his opinion on LaMDA’s sentience because “there is no scientific framework in which to make those determinations.” He then clarified: “My opinions about LaMDA’s personhood and sentience are based on my religious beliefs.” So, an illusory certainty.

His overconfidence in biased beliefs convinced him that a model built with the sole objective of accurately predicting the next token, given a history of previous ones, is sentient and can interact with humans with communicative intent. AI ethics researcher Margaret Mitchell wrote a great thread in response to the WaPo article in which she explains why, from a psycholinguistics perspective, it’s easy to dismiss Lemoine’s claim. She argues that language models learn language in an “observational way” — in contrast to socializing through “back and forth” interactions, like humans — so they never learn “communicative intent.” Intent comes from perceiving the minds of others, from the social side of language, which LaMDA lacks.
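
To make Mitchell’s point concrete, here is a deliberately tiny sketch of what a language model does at inference time: given the previous tokens, it produces a probability distribution over next tokens and samples one. The toy probability table is invented for illustration (a real model computes these probabilities with a neural network trained on vast amounts of text); note that nothing in the loop involves goals, beliefs, or intent.

```python
import random

# Toy stand-in for a language model: a table mapping the last two tokens
# to a probability distribution over the next token. Invented numbers,
# purely for illustration.
NEXT_TOKEN_PROBS = {
    ("I", "feel"):        {"happy": 0.5, "lonely": 0.3, "sentient": 0.2},
    ("feel", "happy"):    {"today": 0.7, ".": 0.3},
    ("feel", "lonely"):   {"sometimes": 0.6, ".": 0.4},
    ("feel", "sentient"): {".": 1.0},
}

def generate(prompt, max_steps=3):
    tokens = list(prompt)
    for _ in range(max_steps):
        dist = NEXT_TOKEN_PROBS.get(tuple(tokens[-2:]))
        if dist is None:
            break
        # The whole "decision": sample in proportion to P(token | context).
        tokens.append(random.choices(list(dist), weights=list(dist.values()))[0])
    return " ".join(tokens)

print(generate(["I", "feel"]))  # e.g. "I feel sentient ." with no intent behind it
```

Scaling this idea up to billions of learned parameters changes how good the predictions are, not what kind of process is producing them.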

But neither Lemoine nor Sutskever is alone in these beliefs. After reading Lemoine’s conversation with LaMDA and the comments on Medium and Twitter, I’ve seen plenty of people at least seriously entertaining the question of LaMDA’s sentience. Many people from the AI community have tweeted about it these days, mostly to criticize Lemoine in one way or another. However, people from all backgrounds are now expressing their opinions, and many agree with Lemoine — even if allowing room for uncertainty. Although most don’t believe his claims for now, people are beginning to read intent into language models’ words, and with it a spark of sentience. As Mitchell says, “we’ll have people who think AI is conscious and people who think AI is not conscious.”

So, the first barrier is precisely what moved Lemoine to make his claim and made people express their opinions as if they had any value: our tendency to base our beliefs on fragile grounds while expressing them with absolute certainty. What’s wrong with Lemoine’s claims isn’t the content (i.e. that LaMDA is sentient), although most people think he’s mistaken — me included. What’s wrong is the lack of scientific rigor with which he arrived at such a conclusion. Perception and intuition based on an unreliable approach aren’t scientific tools.

I don’t think it’s his opinion that disqualifies him from talking about AI sentience, but the fact that he’s gullible as a human, like everybody else, and that he didn’t take measures to overcome that limitation through scientific rigor. Claims based on a combination of perception and intuition, mixed with an unrigorous, unstandardized — and certainly unrepeatable — way of testing, are empty, as they can’t be disproved. If he had had the exact same conversation and claimed the exact opposite (that LaMDA is certainly not sentient), I could be writing the exact same argument and it would apply equally well (although no one would have batted an eye).

This is the main reason why any debate on AI sentience that is built on opinion instead of science is just hubris in terms of finding the truth.

More generally, the number of comments we make — and the degree of confidence with which we express them — on any topic (and more so in those that could have a notable impact on public opinion) should be correlated with the amount of time dedicated to developing the underlying science or to building knowledge on existing literature. Of course, if what we want is to have a good time and feel the unique intellectual stimulation that comes from asking and pondering big questions without trying too hard to answer them, then that’s fine.

However, we have to be careful because these debates have many spectators that build their beliefs on what they hear and read — again, proving my point. Even if AI isn’t near sentient or superintelligent, people will anthropomorphize it as Lemoine did and build their beliefs on top of that.

This is only going to become more pervasive from here on, and because I don’t think people will stop talking about these things at this level of unscientific discourse, the only safe course of action is a complete and decisive rejection of these types of claims.

Second barrier: Definition and measurement

So let’s build the science. Once we agree that discussions at that level of rigor — mostly happening via Twitter these days — are unfruitful at best and dangerous at worst, we can get to the second stage and face the second barrier. We want a set of standard, valid, and reliable scientific tools to measure sentience/consciousness. But to measure anything adequately, we first have to have at least a working definition of what it is we’re measuring.

In the piece I wrote about Sutskever’s claim, I said that “consciousness — not unlike intelligence — is a fuzzy concept that lives in the blurred intersection of philosophy and the cognitive sciences.” It has proven elusive, with no consensus definition to this day. Many hypotheses and models of human consciousness have been developed throughout history. The one into which we could fit Lemoine’s claims, as well as Sutskever’s, is panpsychism, which holds that “the mind is a fundamental and ubiquitous feature of reality.” In the panpsychist’s view, everything is potentially conscious, including an AI.

But panpsychism is just an attractive idea, nothing more, and consciousness “remains in the realm of ill-defined prescientific concepts.” Although we can generally agree on the central features of consciousness (e.g. sense of self, theory of mind, etc.), it gets slippery at the boundaries. As cognitive neuroscientist Anil Seth puts it, “the subjective nature of consciousness makes it difficult even to define.” So, for now, consciousness is scientifically undefined and therefore objectively unmeasurable.

From here, one possibility is that we eventually arrive at a consensus definition of consciousness. That’s the easy scenario. The other possibility, more probable given the history of the topic, is that consciousness will remain ill-defined and we’ll have to bypass that limitation somehow.

The latter scenario reminds me of the motives that led Turing to design the Imitation Game — now called the Turing test — in his seminal 1950 paper “Computing Machinery and Intelligence,” as a substitute for the question “can machines think?” He knew that, given the undefined nature of the word “think,” the question was too ambiguous to be meaningful, and therefore out of reach of scientific inquiry. If we can’t define consciousness, just as Turing couldn’t define “thinking,” we may be able to find a set of substitutes we can actually measure, as he tried to do with his (in)famous test.

It’s now generally accepted that the Turing test isn’t the right tool to measure machine intelligence, so the AI community has stepped up to the task with updated tests that build on Turing’s legacy. One such test is the coffee test, proposed by Apple co-founder Steve Wozniak: a robot would be considered intelligent if it could walk into a generic kitchen and make a cup of coffee. A more general approach, proposed by professor Gary Marcus and others, is to revisit the Turing test “for the twenty-first century” and design a series of tests to evaluate different aspects of intelligence (among which the Winograd Schema Challenge is the most popular).
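
To give a sense of what such a test item looks like, here is the classic trophy/suitcase Winograd schema from Levesque’s challenge, wrapped in a minimal data structure (the structure itself is just a sketch, not the benchmark’s official format):

```python
# A Winograd schema: two nearly identical sentences in which changing one
# word flips which noun the pronoun "it" refers to. Resolving it correctly
# is meant to require commonsense reasoning, not surface statistics.
schema = {
    "template": "The trophy doesn't fit in the suitcase because it is too {}.",
    "candidates": ("the trophy", "the suitcase"),
    "answers": {
        "big": "the trophy",      # too big   -> "it" is the trophy
        "small": "the suitcase",  # too small -> "it" is the suitcase
    },
}

for word, referent in schema["answers"].items():
    print(schema["template"].format(word), "->", referent)
```

A system that resolves many such pairs correctly shows something about commonsense inference, though, as with the Turing test itself, passing wouldn’t settle any question about sentience.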

To study AI sentience we should draw parallels with this line of work. Intelligence and consciousness aren’t that different as objects of scientific research, so it may be plausible to design a reasonable set of tests to measure consciousness indirectly this way. As I argued previously, I think we should “define concrete, measurable properties that relate to the fuzzy idea of consciousness — analogous to how the Turing test relates to the idea of thinking machines — and design tools, tests, and techniques to measure them. We could then check how AI compares with humans in those aspects and conclude to which degree they display those traits.”
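
Purely as a sketch of the shape such an approach might take (every trait name and scoring function below is hypothetical, invented for illustration rather than drawn from any established protocol):

```python
from typing import Any, Callable, Dict

# Hypothetical battery: each measurable proxy trait gets its own scoring
# function returning a value in [0, 1] for the system under test. None of
# these proxies "detects consciousness"; together they only build a profile
# that can be compared against human baselines.
Battery = Dict[str, Callable[[Any], float]]

def run_battery(system: Any, battery: Battery) -> Dict[str, float]:
    return {trait: score(system) for trait, score in battery.items()}

# Placeholder scorers; a real battery would plug in validated tests here.
example_battery: Battery = {
    "self_report_consistency": lambda s: 0.0,
    "theory_of_mind_tasks": lambda s: 0.0,
    "temporal_self_continuity": lambda s: 0.0,
}

# profile = run_battery(candidate_system, example_battery)
```

The point isn’t the particular traits, which are placeholders, but the shape of the approach: measurable proxies compared against human baselines, instead of gut feeling.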

But the story doesn’t end here. Having a good tool doesn’t necessarily mean we can interpret and understand the results from its measures.

Third barrier: Human cognitive limits

Let’s say we have the right approach (scientific inquiry with rigorous experiments instead of opinion debates on social media) and the right tools (a consensual definition of sentience/consciousness and a carefully designed set of tests that exhibit construct validity and reliability to measure it).

What makes us think that’s enough to interpret the answer to the question of AI (or human) sentience?

Let’s think of the double-slit experiment in quantum physics (I won’t explain it here, but it’s mindblowing). It’s a great example that highlights how, even if we have the right approach and tools, interpreting the empirical data can be extremely tricky. Not because we’re doing anything wrong, but maybe because our capacity to understand is at its limit.

Measuring tools are essential to developing science, but they’re in no way enough. They provide the data, but it’s a mind that makes sense of that data and gives it a meaning that fits into the puzzle of fictions and stories on which we’ve built our civilization and our collective understanding of the universe.

One could argue that we don’t need to make sense of everything we measure. If a theory works because its predictions fit the measurements, that’s enough. However, I — like many others — believe that science’s goal isn’t prediction or description, but explanation. We should aim at understanding consciousness, not merely predicting whether an AI has it or not.

If we ever get to this third barrier, we may find that our cognitive capabilities play a definitive role in whether we have access to that truth. This section is more speculation than anything, but I’ll ask you this: what makes you think that human intelligence (whether individual or collective) is greater than, or exactly equal to, the amount of intelligence needed to understand everything? Maybe some questions, like the meaning of a weird quantum effect or what it means to be conscious, are, as Noam Chomsky would say, mysteries that lie beyond our cognitive limits.

