
From autonomous cars to autonomous weapons, the AI ethics issues can't be ignored

source link: https://diginomica.com/autonomous-cars-autonomous-weapons-ai-ethics-issues-cant-be-ignored

Fully autonomous cars (and trucks), once a gag in Woody Allen's 1973 movie "Sleeper," are nearing functional reality. At the same time, military forces are actively designing and, in some cases, deploying Lethal Autonomous Weapon Systems (LAWS) that, in considerable measure, will employ similar technology. The difference, of course, is that autonomous cars will strive to save lives, while LAWS will take lives - a difference well worth examining.

The issue with autonomous cars

Insurance is a big question mark for self-driving vehicles. In the event of an accident, how will fault be assigned? Will an "at-fault" accident count against you as the policyholder? If there is liability, such as bodily injury, what coverage is in place? The US auto insurance industry collects $320 billion per year, and driverless cars fall outside the experience and assumptions on which that business was built.

There were 33,224 fatal car accidents in 2019 (the latest data I could find). When autonomous cars are the norm, fatal-accident counts will be an excellent metric for measuring their relative safety. Most self-driving cars work by relying on a combination of detailed pre-made maps and sensors that "see" obstacles on the road in real time.

Both systems are crucial, and they work in tandem. A fully autonomous car needs a set of sensors that accurately detect objects, distance, speed and so on under all conditions and environments, without a human needing to intervene.
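To make that tandem concrete, here is a minimal Python sketch of how a planner might combine a map prior with live sensor detections. Every name, class, and threshold in it is a hypothetical illustration, not any real vehicle's API.

```
# Minimal sketch (illustrative only): combining a pre-made map prior
# with real-time sensor detections. All names and thresholds are
# hypothetical, not drawn from any real autonomous-vehicle stack.
from dataclasses import dataclass

@dataclass
class MapPrior:
    speed_limit_mps: float   # known in advance from the pre-built map

@dataclass
class Detection:
    kind: str                # e.g. "pedestrian", "vehicle"
    distance_m: float        # range to the object
    confidence: float        # 0.0 - 1.0 from the perception stack

def plan_speed(prior: MapPrior, detections: list[Detection]) -> float:
    """Start from the map's speed limit, then let real-time sensing
    override it when a confident, nearby obstacle appears."""
    target = prior.speed_limit_mps
    for det in detections:
        if det.confidence < 0.5:
            continue                     # ignore low-confidence noise
        if det.distance_m < 10.0:
            return 0.0                   # hard stop for a near obstacle
        if det.distance_m < 30.0:
            target = min(target, det.distance_m / 3.0)  # ease off
    return target

# The map says ~30 mph (13.4 m/s), but a pedestrian detected 8 m
# ahead forces a stop regardless of what the map allows.
print(plan_speed(MapPrior(13.4), [Detection("pedestrian", 8.0, 0.9)]))
```

Note the asymmetry: the map supplies slow-changing context, while the sensors get the final word on anything immediate - which is exactly why degraded sensing is so dangerous.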

Bad weather, traffic jams, and road signs that are defaced or missing can all degrade sensing accuracy. Examples:

  • If autonomous vehicles could communicate with each other - which would require manufacturers to agree on a common protocol, which they have not - orderly operation might be achievable. Even then, the technology cannot control the behavior of human drivers who speed, cross double yellow lines, and otherwise drive recklessly.
  • What happens to the lines? My current car has a lane-keeping system. If the lines on the road disappear, it can't function (a minimal sketch of this failure mode follows this list).
  • How do autonomous cars cope with detours?
  • Is it an oil spot, a puddle, a pothole or a sinkhole? 
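Here is the lane-keeping sketch promised above: a toy controller (names and thresholds invented for illustration) showing why such a system has no choice but to disengage the moment perception loses the lane markings.

```
# Minimal sketch (hypothetical API): a lane-keeping controller that
# hands control back to the driver when lane-marking confidence drops.

def lane_keeping_step(lane_confidence: float,
                      lane_offset_m: float) -> tuple[float, bool]:
    """Return (steering_correction, still_engaged).

    lane_confidence: 0.0 - 1.0, how sure perception is it sees lines.
    lane_offset_m:   lateral offset from lane center (positive = right).
    """
    MIN_CONFIDENCE = 0.6      # made-up threshold
    if lane_confidence < MIN_CONFIDENCE:
        # Faded, snow-covered, or missing lines: nothing to track,
        # so the system must disengage and alert the driver.
        return 0.0, False
    GAIN = -0.5               # made-up proportional steering gain
    return GAIN * lane_offset_m, True

print(lane_keeping_step(0.9, 0.3))  # -> (-0.15, True): correcting
print(lane_keeping_step(0.2, 0.3))  # -> (0.0, False): disengaged
```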

Self-driving cars are - at least they are claimed to be - learning all the time. As their sensors gather data, their internal algorithms incorporate new information, which presumably improves their performance. That is a terribly sloppy supposition: machine learning and deep learning neural networks are notorious for "learning" errant and dysfunctional things. Where is the "human in the loop" to evaluate whether the model has gone off the rails?
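What a human in the loop could look like in practice: a minimal sketch (thresholds and names invented for illustration) of a review gate that refuses to promote a retrained model automatically when its quality regresses or its behavior drifts from the current model.

```
# Minimal sketch (assumptions throughout): route a retrained model to
# a human reviewer instead of auto-deploying when it regresses or
# when its decisions drift from the current model's.

def needs_human_review(old_accuracy: float,
                       new_accuracy: float,
                       disagreement_rate: float) -> bool:
    """disagreement_rate: fraction of held-out cases where the new
    model's decision differs from the old model's."""
    MAX_REGRESSION = 0.01     # made-up tolerance for lost accuracy
    MAX_DISAGREEMENT = 0.05   # made-up tolerance for behavior drift
    if new_accuracy < old_accuracy - MAX_REGRESSION:
        return True           # quality regressed: a human must look
    if disagreement_rate > MAX_DISAGREEMENT:
        return True           # behavior shifted even though accuracy held
    return False              # safe to promote automatically

# A model that keeps its accuracy but changes 12% of its decisions
# still gets routed to a human reviewer.
print(needs_human_review(0.94, 0.94, 0.12))  # -> True
```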

In a previous article, I wrote:

There are drawbacks to this “learning” regimen. AI, including machine learning and deep learning, involves the development of algorithms that create predictive models from data. We all know this. But perhaps not so well understood is that it is typically inspired by statistics rather than by neuroscience or psychology. The goal is to perform specific tasks rather than capture general intelligence. It is vital to understand precisely what the model can tell you and why. 

Machine learning, neural nets, and deep learning do not learn concepts; instead, they find shortcuts that connect answers to inputs in the training set. They are susceptible to shortcut learning: statistical associations in the training data that let a model produce incorrect answers, or correct answers for the wrong reasons. This absence of fundamental understanding can cause unpredictable errors when the model faces situations that differ from its training data. The algorithm is simply minimizing a cost function and will take the shortest path it can find to do so. Put another way, a machine learning model will only learn what you want it to learn if that happens to be the easiest way to optimize its metric.
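Shortcut learning is easy to reproduce. The sketch below (NumPy only; the data are contrived for illustration) plants a spurious feature that perfectly tracks the label during training. A plain logistic regression leans almost entirely on that shortcut, then falls apart when the correlation breaks at test time.

```
# Minimal sketch of shortcut learning with contrived data.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

real = rng.normal(0, 1, n)                 # noisy "real" feature
y = real + rng.normal(0, 2, n) > 0         # labels driven by the real signal
shortcut = y.astype(float)                 # spurious feature: copies the label
ones = np.ones(n)                          # intercept term
X_train = np.column_stack([real, shortcut, ones])

# Plain logistic regression trained by full-batch gradient descent.
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X_train @ w))
    w -= 0.1 * X_train.T @ (p - y) / n

p_train = 1 / (1 + np.exp(-X_train @ w))
print("train accuracy:", ((p_train > 0.5) == y).mean())  # near perfect
print("weights [real, shortcut, bias]:", w)              # shortcut dominates

# At test time the shortcut no longer correlates with the label.
X_test = np.column_stack([real, rng.integers(0, 2, n).astype(float), ones])
print("test accuracy:", ((X_test @ w > 0) == y).mean())  # collapses toward chance
```

The model never learned the concept behind the labels; it learned the cheapest statistical proxy available, exactly as described above.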

LAWS - raising the autonomous ethics stakes

An autonomous car only has to drive you from point to point (which is why even 16-year-olds can learn to drive). Consider how much more complex the job of a LAWS device is: it has to reach a location (and presumably return), but it must also identify the target and launch its weapon successfully. Given how many obstacles remain to putting autonomous cars on the road, imagine how far we are from LAWS reliably performing their deadly objective.

Insurance is not an issue for LAWS. The common thread pulling autonomous cars and LAWS together is autonomy: the ability of these weapons to select and engage targets, including people, without human control.

The primary ethical argument in favor of autonomous weapon systems has been results-oriented: their potential precision and reliability might enable better respect for international law and human ethical values, resulting in fewer adverse humanitarian consequences. The first line of argument against LAWS is that autonomous weapons are unethical because they are incapable of complying with the existing principles of military ethics: although current computers can calculate much faster than humans, they lack dynamic and creative thinking. The consensus seems to be that these weapons are unethical because they reduce accountability when civilians are killed or property is unjustly damaged.

One argument for LAWS is a projected reduction in civilian casualties. I don't see a compelling case for this. Automated or not, these weapons still blow things up: people, buildings, hospitals, bridges. The toll on civilians is crushing. According to a report from the Watson Institute for International and Public Affairs at Brown University, 387,072 civilians died violent deaths in the U.S. post-9/11 wars in Iraq, Afghanistan, Yemen, Syria, and Pakistan - and indirect war deaths, from malnutrition and from damaged health systems and environments, likely far outnumber the deaths from combat.

Look no further than the nightmare unfolding before our eyes in Ukraine. In every major conflict, civilians take the brunt of war. In 2003, the Oxford economist Paul Collier wrote in a World Bank research report, Breaking the Conflict Trap: Civil War and Development Policy, that taking fatalities and population displacements together, nearly 90 percent of the casualties of modern armed conflicts were civilians. Most estimates suggest that some 75 million people died in WWII, including about 20 million military personnel and 40 million civilians. I've seen no credible evidence that LAWS would make a difference in this.

My take

The mass killing of combatants and non-combatants, the massive destruction of property and infrastructure, and the poisoning of the environment make shaky ground on which to build an ethics - especially for AI, which we all hope will develop for the betterment of the world, not its destruction.

But the autonomous technology train has already left the station. So the only hope, and it is probably a slim one, is to keep humans in the loop over targeting and execution decisions by banning the development of fully autonomous weapons through international law and treaty.

Here is an ethical principle: every technology organization, public or private, including people developing AI and robotics, should pledge never to be part of the development of fully autonomous weapons. There may be hope, as some have done exactly that.

