
The Trolley Problem Isn’t Theoretical Anymore


[Image: Photo by Justin Bautista on Unsplash]



Oct 28 · 10 min read

You are the conductor of a runaway trolley that is hurtling down its track at 85 miles an hour, heading straight for a group of young boys playing on the tracks, blissfully unaware of their impending doom. You realize that you can pull a lever to switch the trolley to an alternate track, saving the lives of these boys. Before you pull the lever, though, you see that a young girl is playing on the tracks of the alternate route. Pulling the lever would mean ending her life. You have ten seconds until it is too late to decide…

What do you do?

The trolley problem is a thought experiment first introduced by British philosopher Philippa Foot in 1967. In 1985, the problem was revisited in an academic paper by the philosopher Judith Jarvis Thomson, which has since been cited over 1,300 times.

The good news is that discussions about ethics are becoming more common in computer science classrooms at universities. Engineers are finally beginning to discuss questions of values and fairness in digital systems and algorithms. What is discussed far less often, though, are the consequences, intended or not, of discriminatory systems and biased algorithms that are already deployed and used by people every day.

The trolley problem is already being played out by companies like Tesla, Google, Uber, Lyft, Argo, Embark, and General Motors. The problem goes like this:

If a self-driving car finds itself in a situation where it has to swerve to save its driver, but swerving left means hitting a child crossing the street and swerving right means hitting two elderly women crossing the road, which direction should it swerve?

Previously, Google chose a deontological value: always hit the smallest object, no matter what (there was no difference between a trash can and a baby in a stroller).* Tesla opted out of accountability altogether: crowd-source human driving data and mimic human driving behavior. This includes speeding, swerving, and (sometimes) breaking the law.

Why are CS classrooms discussing algorithms and AI only in theory? The technology is here. It isn't theoretical anymore. It is time to assess the algorithms that already exist in the growing digital landscape, the ones making decisions that could negatively or positively impact society.

But first, we must discuss the moral frameworks that these systems are built upon.

What are ethics and moral philosophy?

Before we can ethically assess algorithms and machine learning models, we must first discuss the values that are encoded into them. Although there are many frameworks for ethics and moral philosophy, I’m only going to review the most commonly occurring ones:

1. Utilitarianism

This ethical theory is a numbers game. It focuses on the consequences of an action. According to utilitarianism, an action is ethical if it causes the most good/pleasure and the least pain/suffering.

A utilitarian would be okay with harvesting one unsuspecting person’s organs if it meant saving the lives of five people who needed transplants. When it comes to the trolley problem, a utilitarian would always choose to hit the smallest number of people on the road — no matter who they were.

[Image: scenario from moralmachine.mit.edu]

This ethical framework is the easiest for digital systems to adopt, because a numbers game is easy to turn into code. It leaves no room for granularity.
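To make that concrete, here is a minimal sketch of what a purely utilitarian chooser could look like. The scenario, action names, and harm counts are hypothetical illustrations, not any company's actual objective function:

```python
# Minimal utilitarian chooser: pick the action whose outcome harms the fewest people.
# The actions and harm counts below are hypothetical illustrations.

def utilitarian_choice(options: dict) -> str:
    """Return the action with the smallest number of people harmed."""
    return min(options, key=options.get)

if __name__ == "__main__":
    # Each key is a possible action; each value is how many people it would harm.
    scenario = {"swerve_left": 1, "swerve_right": 2, "stay_on_course": 5}
    print(utilitarian_choice(scenario))  # -> swerve_left
```

Nothing in this function knows who the people are; that is exactly the lack of granularity described above.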

2. Deontology

Deontological theory comes from the philosopher Immanuel Kant. It focuses less on consequences and more on the actions themselves. In deontology, one rule is chosen and treated as universal law.

The ends never justify the means.

In the case of the trolley problem, this would mean that the conductor must choose one metric for fairness and never break it. They might choose to always save the most lives, the youngest lives, the oldest lives, and so on. No matter what, their metric must always be followed.
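As a rough sketch, a deontological policy amounts to hard-coding one rule and applying it to every case, regardless of context. The chosen rule and the data fields below are hypothetical:

```python
# Minimal deontological policy: one universal rule, applied identically to every case.
# The rule chosen here ("always save the most lives") and the Outcome fields are hypothetical.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str
    lives_saved: int

def deontological_choice(outcomes: list) -> str:
    """Apply the single fixed rule: always pick the action that saves the most lives."""
    return max(outcomes, key=lambda o: o.lives_saved).action

if __name__ == "__main__":
    print(deontological_choice([
        Outcome("swerve_left", lives_saved=1),
        Outcome("swerve_right", lives_saved=3),
    ]))  # -> swerve_right
```

The point is that the rule never changes: the function has no notion of context, which is the flaw discussed below.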

In the following image, the deontological rule for a self-driving car might be to "always save the most lives that will contribute to the most overall good."

[Image: scenario from moralmachine.mit.edu]

This is similar to how laws are created: one rule covers all cases of a specific action. But, just as with policy, there is a major flaw in deontology: all things in life are contextual. Sometimes following the same rule will result in fair decisions for some and unfair decisions for others. How would our deontological rule account for the following scenario?

[Image: scenario from moralmachine.mit.edu]

3. Virtue Ethics

Finally, virtue ethics. This moral philosophy focuses less on actions or consequences and instead places all of the weight on the moral character of the person performing the action. In other words, the motivation behind an action is the focus.

If the trolley conductor saves the lives of five boys, but only so he can swerve the trolley into his ex-girlfriend (who recently broke up with him), his actions aren't virtuous. Even though he saved five lives, his motivations weren't pure.

This gives humans greater agency to break rules and perform actions that might be controversial for some, as long as those actions come from virtuous motivations. This does lead to a big problem though:

What is a virtuous motivation?

It turns out that the answer to this question varies widely between people, cultures, and geographic locations.

What are the ethics of the systems that we employ?

Now that we all understand the basics of ethical and moral philosophy, we can apply these concepts to the digital systems that are heavily impacting society today.

A tool that can help us assess the underlying ethical implications of digital systems is the framework of Privacy as Contextual Integrity , created by Helen Nissenbaum. Although this framework was originally intended to help assess digital privacy, it can easily be applied to all digital innovations.

Utilizing some of the techniques from Nissenbaum’s framework, I propose a framework to identify and modify unethical technology. In order to make this framework approachable for everyone, I’ll introduce it as a decision tree.

Let’s call this the ETHItechniCAL Framework:

[Image: the ETHItechniCAL framework decision tree]
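The full decision tree lives in the figure above. Purely to illustrate the idea in code, here is a highly simplified sketch of walking such a tree; the questions and verdicts below are illustrative stand-ins, not the framework's exact wording:

```python
# A highly simplified, illustrative walk through a decision tree like the one in
# the figure. The questions and verdicts here are illustrative only, not the
# exact wording of the ETHItechniCAL framework.

def assess_system(answers: dict) -> str:
    """Return a verdict for a digital system based on yes/no answers."""
    if not answers["values_explicitly_chosen"]:
        return "Stop: decide on an ethical framework before building."
    if answers["more_harm_than_non_digital_alternative"]:
        return "Reject or redesign: the non-digital alternative is less harmful."
    if not answers["values_explainable_to_stakeholders"]:
        return "Modify: add transparency and explainability before deployment."
    return "Deploy, and keep auditing the system against its stated values."

if __name__ == "__main__":
    compas_like = {
        "values_explicitly_chosen": False,
        "more_harm_than_non_digital_alternative": True,
        "values_explainable_to_stakeholders": False,
    }
    print(assess_system(compas_like))
```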

In order to make sure we fully understand this framework, let’s put it to the test with a use case:

Assessing a defendant’s risk for returning to crime.

AKA: Assigning a recidivism risk score.

Let's go back in time to the 1700s, 1800s, and 1900s:

In this scenario, the non-digital alternative for assessing someone’s recidivism risk in court was often just a judge’s opinion. Evidence could be brought to light about past behaviors that might influence a defendant’s likelihood of returning to crime, but someone’s ‘risk assessment’ was an educated guess, at best.

In the past, before statistics and technology were more widely adopted in court, a criminologist, judge, or jury member could simply mark someone as 'high risk for recidivism' because they didn't like their demeanor. Or worse, because they didn't like their race.

Now fast forward to the 1990s, when a new digital alternative enters the scene: COMPAS, a piece of software that predicts recidivism risk scores for defendants. It became widely used in some US states.

“Scores like this — known as risk assessments — are increasingly common in courtrooms across the nation. They are used to inform decisions about who can be set free at every stage of the criminal justice system, from assigning bond amounts, to even more fundamental decisions about defendants’ freedom.” ~ Machine Bias , ProPublica

Unfortunately for COMPAS (and for those who suffered the consequences of its software), it turns out the algorithm gave disproportionately higher risk scores to black defendants than it did to white defendants. The algorithm was deontological, but its rules for assessing risk were unfair to anyone who wasn't white. The software was unethical. [1][2][3]
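ProPublica's analysis compared error rates across racial groups, finding that black defendants who did not go on to reoffend were far more likely to be labeled high risk than white defendants who did not reoffend. A minimal sketch of that kind of audit check, run on made-up records, might look like this:

```python
# Minimal audit sketch: compare false positive rates across groups, i.e. the share
# of people who did NOT reoffend but were labeled high risk. The records are made up.

from collections import defaultdict

def false_positive_rates(records: list) -> dict:
    """Return the false positive rate of the 'high risk' label for each group."""
    flagged = defaultdict(int)  # non-reoffenders labeled high risk
    total = defaultdict(int)    # all non-reoffenders
    for r in records:
        if not r["reoffended"]:
            total[r["group"]] += 1
            if r["labeled_high_risk"]:
                flagged[r["group"]] += 1
    return {group: flagged[group] / total[group] for group in total}

if __name__ == "__main__":
    sample = [
        {"group": "A", "labeled_high_risk": True,  "reoffended": False},
        {"group": "A", "labeled_high_risk": False, "reoffended": False},
        {"group": "B", "labeled_high_risk": False, "reoffended": False},
        {"group": "B", "labeled_high_risk": False, "reoffended": False},
    ]
    print(false_positive_rates(sample))  # e.g. {'A': 0.5, 'B': 0.0}
```

A large gap between groups is exactly the kind of red flag an audit before deployment could have surfaced.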

Now, let's imagine a do-over. Let's pretend that it is 1989 and we are the developers of the COMPAS algorithm. In order not to repeat the past, we decide that before we even begin to select which features to use in our training datasets, we are going to focus on the ethics of our algorithm.

We go to our handy ETHItechniCAL framework.

The players:

System: COMPAS software

Non-Digital Alternative: Judge/jury opinions

If it had been up to me in 2013, here’s how our COMPAS software would have measured up:

[Image: where COMPAS would map on the ETHItechniCAL framework]

Or, for the optimistically minded, our COMPAS software would, at the very minimum, have mapped here:

[Image: a more optimistic mapping of COMPAS on the ETHItechniCAL framework]

Here's my point: I doubt that, back then, the designers of the COMPAS algorithm had a conversation about deontology. I doubt that they researched what their algorithm might have done had they instead chosen a utilitarian or values-based approach. Honestly, it doesn't seem like the developers of this software had a conversation about ethics at all. Or, if they did, it must have come long after their system became widely used.

COMPAS is not alone. What other systems in place today could have caused much less societal harm by utilizing something like the ETHItechniCAL framework?

Transparency & Explainability — The Keys to Success

Now that we understand the ethics and values that our systems uphold, there are two important steps to take:

1. Make sure the intended values of the system match its reported values (ask users whether they think the system is fair or not).
2. Explain these values to all stakeholders involved. This includes the users who are positively or negatively impacted by the system, the engineers who built it, and anyone else who may interact with it.

In the case of the COMPAS algorithm, defendants had a right to know what information about them was causing a high or low risk score. They weren’t given this information. Even worse, judges weren’t given this information either.

Everyone was blindly trusting an algorithm that was just as racist as the hundreds of years of crime data that was fed into it.

Transparency could have helped fix this.
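COMPAS is proprietary, so what follows is only a sketch of the kind of explanation defendants and judges could have been shown, assuming (hypothetically) a simple linear risk model with illustrative feature names and weights:

```python
# Sketch of a per-feature explanation for a simple linear risk score.
# COMPAS's real model and inputs are proprietary; the feature names and weights
# below are hypothetical.

def explain_score(weights: dict, features: dict) -> None:
    """Print each feature's contribution to the total risk score."""
    contributions = {name: weights[name] * features.get(name, 0.0) for name in weights}
    for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"{name:>15}: {value:+.2f}")
    print(f"{'total score':>15}: {sum(contributions.values()):+.2f}")

if __name__ == "__main__":
    hypothetical_weights = {"prior_arrests": 0.8, "age_under_25": 0.5, "employed": -0.6}
    defendant = {"prior_arrests": 2, "age_under_25": 1, "employed": 1}
    explain_score(hypothetical_weights, defendant)
```

Even a report this simple would let a defendant see which inputs drove their score, and would let a judge question whether those inputs are fair.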

What’s Left?

If you are involved in the creation of any kind of technology, or if you interact with any kind of technology, you should be actively concerned about these issues. Ideally, you should be having conversations about algorithmic ethics in your workplace or in your school. If your classes feel highly theoretical, ask your teachers if you can start using current case studies to drive the conversation.

I understand that these problems aren't easily solved, but change has to start somewhere. It will likely take a highly interdisciplinary approach to assess algorithms and machine learning models for their values.

There’s also a major obstacle that stands in the way:

Companies aren’t willing to assess the ethics of their own algorithms.

Even as I write this, I recognize that the legal departments of large tech companies might laugh at my words. Infamous algorithmic scandals [1][2][3][4][5] in the past few years have proved that the tech industry tends to focus on ethics only when it becomes a legal issue.

This is why I propose an alternative approach to ethical assessment: use ethics as a business metric. If users are aware of the values that the systems they use promote, they will understand those systems better. When users aren't blindsided by values they weren't aware of, they will trust a system more.

Trust ties directly into user retention.

Companies are afraid to assess or audit their algorithms for fear of discovering that something is wrong. If they find out their systems are unethical, they must spend the money and time to fix the problem. This is why most legal teams at companies advise engineers to avoid algorithmic audits whenever possible, unless there is a lawsuit at play or a legal policy that requires compliance.

If we could utilize fairness and ethics as a business metric, maybe companies would be more willing to check themselves. Maybe legal teams would let up.

It’s not a guarantee, but it’s a great start.

Final Words

[Image: Photo by Amogh Manjunath on Unsplash]

We’ve debated ethical and moral philosophy for thousands of years. Philosophy is a theoretical field because there are no universal metrics for ‘good’ and ‘bad’ that satisfy everyone. When people with opposing beliefs debate morals and virtue, they are often left with more questions than answers.

With the advent of technology, there is now an opportunity to create systems and artifacts with values built into them.

If we don’t explicitly define which ethical frameworks or values we are selecting for a system, we run the risk of unintended consequences that may be ‘unfair’ for many.

Easier-to-understand guidelines about the values selected for a system's creation will help engineers grasp the societal implications of their work. That, in turn, will make it much easier to explain this information to the users these systems impact.

Explainability builds transparency. Transparency builds trust.

My goal isn’t to punish tech companies. Rather, my goal is to motivate them to want to be ethical and to want to hold themselves accountable, by utilizing the wonderful ethics conversations that are starting in the classroom. Let’s take these discussions and apply them to problems that already exist.

Together, we can help code with greater intentionality.

Intentional systems reduce the risk of unintended societal harm.

In the trolley problem, we aren't all going to agree on who we should save or kill. We have different values and different opinions. When it comes to self-driving cars, it doesn't matter who agrees or disagrees; the choice of who to save has already been made for us.

It is time to understand the consequences of these choices. It is time to be transparent about algorithmic design. This isn’t theory anymore. This is reality. When we raise our voice, we have a choice.

Footnote: * My information about Google's self-driving cars is based on a dated article. If anyone works for Google or Waymo and would like to share the current objective function for your self-driving cars in the case of unavoidable collisions, your help and transparency would be greatly appreciated!

