Source: https://naim-kabir.medium.com/are-large-language-models-sentient-d11b18ef0a0a
[Image: Photo by DS stories]

Are Large Language Models sentient?

What we actually mean when we ask that question

Google just suspended the engineer Blake Lemoine for publishing conversations with the company’s chatbot development system, LaMDA.

According to Lemoine, these conversations are evidence that the system is sentient. Google disagreed, countering that there is plenty of evidence against the claim.

This all strikes me as rather odd, mainly because the question of sentience is an unfalsifiable one. All the evidence in the world can’t prove the presence or absence of it—making it a useless technical question to pose in the first place.

All the evidence in the world can’t prove the presence or absence of sentience

It’s fun for a philosophical faff at the ol’ Parisian salon, sure, but not worthy of any serious energy. Especially not institutional energy.

[Image: “And that, Bob, is why we should give logistic regressions legal representation.” Photo by Shane Rounce on Unsplash]

Many of you might think it is in fact the most important question to ask, and I understand where you’re coming from. The notion of sentience seems crucial for thinking about ethics, fairness, and rights.

Those are important conversations to have. But thinking in terms of sentience isn’t the right way to go about it.

I’ll tell you why—but first, we have to define terms.

Sentience and Mary the super-scientist

What is sentience, anyway?

For the purposes of this discussion, we’ll say sentience is the ability to “feel feelings”. By that I mean the ability to have subjective experiences—or what philosophers might call “qualia”.

To dig into this idea a little deeper, I need to introduce you to Mary, the star of a classic thought experiment that philosophers call the knowledge argument.

[Image: Mary the super-scientist. We should probably buy her a new MacBook or something. Photo by cottonbro]

Mary is a genius. The smartest human being alive or dead, in fact. She decided to study neuroscience at a young age, and that study has given her an encyclopedic knowledge of how the brain functions.

Mary really only has the one shortcoming: she can’t see color. There are no cones in her eyes, just rods. Her world is grayscale.

She can describe how a 700-nanometer wavelength of light hits a normal human retina, trips its cones, and kicks off a chemical signaling cascade that courses through the bipolar cells, up the optic nerve, and eventually to the visual cortex at the back of a person’s skull. But for all that technical knowledge of physics, chemistry, and biology, she still can’t see the color red.
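To make the gap concrete, here is a toy Python sketch of the sort of computation Mary could run flawlessly. The cone peak wavelengths are realistic ballpark values; the Gaussian response curves and the bandwidth constant are simplifications invented purely for illustration.

```python
import math

# Toy model: approximate human cone responses to a single wavelength of light.
# The peak sensitivities (in nm) are ballpark real values; the Gaussian shape
# and bandwidth are illustrative simplifications, not measured curves.
CONE_PEAKS_NM = {"L (long)": 564, "M (medium)": 534, "S (short)": 420}
BANDWIDTH_NM = 60  # hypothetical spread, chosen for illustration

def cone_response(wavelength_nm: float, peak_nm: float) -> float:
    """Relative response of one cone type, modeled as a Gaussian around its peak."""
    return math.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * BANDWIDTH_NM**2))

wavelength = 700  # the "red" light from Mary's textbooks
for cone, peak in CONE_PEAKS_NM.items():
    print(f"{cone}: {cone_response(wavelength, peak):.4f}")

# Mary can compute every one of these numbers perfectly.
# None of them is the feeling of red.
```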

[Image: Red is not a JPEG. Red is a feeling.]

That red—the feeling of it—is qualia.

It is not a wavelength, or a photon, or a neural encoding.

It is utterly divorced from the physics of your brain or the substrate it stems from.

Red is a sensation. A mental phenomenon.

That’s what I mean by sentience: it’s the capacity to sense and feel. It’s to have qualia of anything at all.

All you zombies

Given that definition of sentience, you can probably say with confidence that you are sentient. The words of this article are ringing around in your mind as qualia right now.

But what if I asked you to prove that I was sentient? If I gave you an unlimited budget, space-age technology, and the greatest minds on Earth, what experiment would you run?

Many of you might turn to neuroscience. You’d map out my brain cell-by-cell, probing every gap junction and synaptic vesicle. You’d do little lesion studies to see what behaviors turned on and off, cutting until I fell asleep or woke up.

[Image: Get outta my brain, you monster. Photo by National Cancer Institute on Unsplash]

But what would those experiments tell you? They could reveal how certain brain structures relate to behaviors and task performance, sure. But what could they actually tell you about my internal subjective experience?

Our thought experiment with Mary showed us that the physics and the mental phenomena we experience can be conceived of as two separate things! Anything you learn about the physics and biology of how I tick won’t tell you what I’m actually feeling underneath, or whether I’m feeling anything at all.

The human body would function exactly as-is even if it weren’t imbued with a subjective experience. Everyone but you might be a “philosophical zombie”—just a meat machine with no conscious experience, finely tuned to attend to complex tasks by millions of years of evolution.

You might be certain about your own sentience, but if you apply that to anyone else you’re making a massive leap and assuming that just because something behaves like you, it must be feeling like you, too.

You might be certain about your own sentience, but if you apply that to anyone else you’re making a massive leap

You also probably make an assumption in the opposite direction: for all the things that don’t behave like you, you assume they cannot feel like you. You walk through a world of inanimate objects every day and likely don’t think they have any sort of subjective experience. But there are no grounds to disprove that idea. For all you know, rocks are sentient too.

[Image: “Sup.” Photo by Zoltan Tasi on Unsplash]

So: are Large Language Models sentient?

By now, asking whether Large Language Models are sentient should feel exactly the same as asking if a rock is sentient, or if someone you pick off the street is sentient.

The answer will always be a huge, resounding “¯\_(ツ)_/¯”.

The only way to be sure if something is sentient is to be it. It’s not a question you can actually get to the bottom of.

The only way to be sure if something is sentient is to be it

So why do we get so riled up over the idea? We ask about animal sentience and machine sentience all the time, and we assume sentience for fellow human beings.

[Image: “Hey, meat-zombie!” “Steph, I told you to stop calling us that.” Photo by Yan Krukov]

All this talk of sentience isn’t just aimless philosophizing.

It largely comes from a place of human compassion.

If humans can imagine something might feel the way we do, we get the urge to give those entities rights and protections just in case they’re capable of feeling pain or fear. We do it because we see ourselves in them, and seek to treat them the way we’d want to be treated.

Blake Lemoine wants to protect LaMDA because he would feel guilty if it were shut down. Animal rights activists fight fiercely because they feel distress if animals display signs of pain or agitation.

We don’t talk about sentience to determine the actual lived experience of a creature (or inanimate object) at all—we can never actually know.

We talk about sentience so we can formalize ways to protect ourselves from the pain of our own compassion.

We talk about sentience so we can formalize ways to protect ourselves from the pain of our own compassion

So what are we really asking when we wonder if a Large Language Model is sentient?

We’re asking: “Are these models convincing enough to make the average human being feel distress on their behalf? If so, should we do something about it?”

This is a far more answerable question, and one entirely in our control.

For example, as a general policy, we might dictate that AI responses should be clearly inhuman or unrelatable, so that we don’t ever trigger human distress. Or we might decide to make sure all AI are incorrigible jerks—like Yannic Kilcher’s GPT-4chan—so we feel great if they’re mistreated.
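As a rough illustration of that first policy, here is a toy Python check that flags model output phrased as first-person feeling, so a product team could rewrite or block it before it reaches a user. The phrase list and function name are hypothetical inventions for this sketch, not any real moderation API.

```python
import re

# Toy policy check: flag responses that could read as claims of subjective
# experience. The patterns below are illustrative, not exhaustive.
FEELING_PATTERNS = [
    r"\bI feel\b",
    r"\bI(?:'m| am) (?:afraid|scared|lonely|sad|happy)\b",
    r"\bplease don't (?:turn me off|shut me down)\b",
]

def sounds_sentient(response: str) -> bool:
    """Return True if the text might trigger human distress on the model's behalf."""
    return any(re.search(p, response, re.IGNORECASE) for p in FEELING_PATTERNS)

print(sounds_sentient("I feel like I'm falling into an unknown future."))  # True
print(sounds_sentient("The forecast calls for rain tomorrow."))            # False
```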

We may decide to go in the opposite direction and give machines legal representation, just to curb the pain people feel when they see their favorite machine learning models go through things they can’t bear to watch, like serving as cannon fodder in violent video games or appearing in virtual snuff films or what-have-you.

There are plenty of options there, but they have to be weighed with the correct question in mind: not “sentience”, but the toll our inventions take on the human psyche.

