
Can AI Help Us Be Better People?

source link: https://nautil.us/can-ai-help-us-be-better-people-260216/


One question for Jon Rueda, a Ph.D. candidate and La Caixa INPhINIT Fellow at the University of Granada, where he studies the intersection between bioethics, ethics of emerging technologies, and philosophy of biomedical innovations.


Photo courtesy of Jon Rueda

Can AI help us be better people?

Yes. I have published a new article with a colleague, Bianca Rodriguez, in which we argue that AI assistants could indeed help us improve some aspects of our morality. Some AI models aim to make us more aware of the limitations of our psychology when we are trying to decide what to do, or to provide relevant factual information. Others start by learning your values and preferences, and then, in concrete moments, try to offer the best course of action. These are controversial in some ways, because they are not going to improve your capacity to make your own decisions. We analyze another, more promising system called the Socratic assistant, or SocrAI, which is based mainly on the idea that through dialogue we can advance our knowledge, think through complex moral issues, and improve our moral judgments.

This AI-based voice assistant hasn’t been developed commercially. But I know there’s interest, because one of the proponents of the idea, the philosopher Francisco Lara, told us that some companies have reached out to him about it. This interest is going to grow. Because of the very famous ChatGPT, there is increasing awareness of how AI is improving. We feel that we are having a real conversation with an AI system.

The AI-based Socratic assistant we discuss in our paper wouldn’t necessarily be trained on Socrates’ words as we know them from Plato’s writings—it would just try to emulate his Socratic method. It’s based on a more procedural understanding of ethics, which is the more philosophically provocative aspect of our paper. This Socrates is not going to tell you, “You should do that,” in a concrete moment, but will help you improve your reasoning—to consider empirical facts, to think more logically and coherently. So it won’t tell you what is right or wrong. Socrates never says what the truth is, the concrete truth. But through the dialogues, he shows you the weak points of your arguments. Through irony, he tells you that what you have said can be counterargued. And in that process you learn and improve your moral reasoning.
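
To make that procedural idea concrete, here is a minimal sketch of what such a dialogue loop might look like. It is not from Rueda’s paper or any real SocrAI implementation; SOCRATIC_PROMPT, ask_model, and socratic_dialogue are hypothetical names, and ask_model merely stands in for a call to any conversational language model.

# A minimal sketch of a Socratic-assistant loop: the assistant never
# issues moral verdicts, it only probes the user's reasoning with
# questions. `ask_model` is a hypothetical placeholder, not a real API.

SOCRATIC_PROMPT = (
    "You are a Socratic moral assistant. Never say what is right or "
    "wrong. Instead, ask questions that expose hidden assumptions, "
    "test logical coherence, and point to overlooked empirical facts."
)

def ask_model(system_prompt: str, history: list[dict]) -> str:
    """Placeholder for a real conversational-model call. Here it returns
    a canned probing question so the sketch runs end to end."""
    last_claim = history[-1]["content"]
    return (f"You said: '{last_claim}'. What assumption does that claim "
            "rest on, and would you accept it if the roles were reversed?")

def socratic_dialogue() -> None:
    history: list[dict] = []
    while True:
        claim = input("Your moral position (blank line to stop): ").strip()
        if not claim:
            break
        history.append({"role": "user", "content": claim})
        # Respond only with probing questions, never a prescribed action.
        reply = ask_model(SOCRATIC_PROMPT, history)
        history.append({"role": "assistant", "content": reply})
        print(reply)

if __name__ == "__main__":
    socratic_dialogue()

The only load-bearing part of the sketch is the constraint expressed in the system prompt: the assistant replies with questions and counterexamples rather than verdicts, which is what distinguishes this procedural approach from the value-matching assistants described earlier.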

We are optimistic in our article, but there are also many concerns that we do not deal with, like data protection: What will happen to the data that is created through users’ interactions with the system? That data is also valuable and will help to improve the system.

These systems could also have a problematic tendency to shape people’s autonomy and agency. AI could influence our character, and manipulate or nudge us toward certain types of behavior. There could also be a problem of deskilling our moral abilities. Imagine that we develop a kind of dependence on these systems. If they do not protect our autonomy—if people start deferring to the advice of AI systems when making ethical decisions—in the long term that could be negative. So it’s difficult to have a balanced appreciation of this technology.

Would it be good to have children grow up with a Socratic assistant? I have the intuition that we should be more protective of children because they are still developing. They are forming their own autonomy, and it’s more sensible not to offer them technologies that will limit or narrow it. But on the other hand, children are already exposed to other kinds of technologies that can manipulate them, that shape their preferences and perspectives. So the relationship between children and new technologies is already a reality. And of course AI applications could have a role in this: If we give children good tools to improve their moral abilities, that would be good, but we should also be alert to the deleterious effects.

Some people argue that, because of our evolutionary history, we are biased toward those closer to us in time and space, that we have strong tendencies to be partial, and that AI could help us be more like an ideal observer. This view is in some sense also problematic, because we know that AI systems have different kinds of biases. Some of these biases are particular to AI, but many are very similar to the biases in our own psychology. In that sense AI could not only reproduce but also amplify human biases, so we should not be overly optimistic about using AI to overcome the limitations of our moral psychology.

Lead image: Mariart0 and Sabelskaya / Shutterstock

Brian Gallagher

Posted on January 30, 2023

Brian Gallagher is an associate editor at Nautilus. Follow him on Twitter @bsgallagher.
