
Autonomous weapons systems - a cautionary use case for evaluating AI risks

source link: https://diginomica.com/autonomous-weapons-systems-cautionary-use-case-evaluating-ai-risks

By Neil Raden

March 30, 2023



Lethal Autonomous Weapons Systems (LAWS) are machines that can independently select and attack targets without human intervention.

The presumption is that they provide a faster and more efficient way to conduct military operations. However, their use raises significant ethical questions, such as who is responsible for the actions of these machines and what happens when mistakes are made.

The ethical implications of LAWS can be discussed from different perspectives. One of the main arguments against the use of such weapons is the violation of human dignity. Do machines that can independently select targets and attack them without human intervention undermine the value of human life?

Accountability is another ethical concern. In the event of a mistake or malfunction, who is responsible for the actions of these machines? The use of LAWS raises the possibility of accidental or intentional harm to innocent civilians, and it may not always be clear who is responsible for such actions. A robot does not share the same reality as humans, so it is likely that robots and humans will never fully share a language. Terms like "human" and "harm" are so semantically ambiguous that a robot cannot understand them, and it is impossible to program into a robot every possible action and consequence.

The use of AI in lethal autonomous weapons also raises the question of whether such weapons can comply with international law. The principles of international humanitarian law require that the use of force be proportionate, discriminate between military and civilian targets, and avoid unnecessary suffering. Autonomous lethal weapons challenge these principles, as machines cannot judge the proportionality of a given attack or the civilian nature of a target.

In 2012, the Department of Defense issued Directive 3000.09, which established a policy framework for developing and using autonomous weapons. The directive states that "autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force."

The policy framework requires that autonomous weapons systems be developed and used in compliance with international law, including the principles of distinction, proportionality, and military necessity.

Despite these policy guidelines, the use of autonomous weapons by the United States military is not without ethical concerns. Drones are a form of autonomous weapon, and their use has raised ethical questions about their effectiveness in achieving military objectives and the civilian casualties they have caused. The United States has been involved in military operations in Iraq, Afghanistan, and Syria, where drones have been widely used.

Drones have also led to the development of other autonomous lethal weapons, such as armed ground robots. These robots are designed for ground combat and can autonomously select and engage targets. Using such robots raises significant ethical questions, such as the possibility of civilian casualties, accountability for their actions, and compliance with international law.

The use of autonomous lethal weapons by the United States military raises concerns about the ethical implications and the impact on international relations. Other countries may view the use of such weapons as a form of aggression, leading to an escalation of conflict; it may also prompt them to develop similar armaments, resulting in an arms race that threatens global security.

In sum, the use of AI in autonomous lethal weapons raises significant ethical concerns, and the development and use of such weapons challenge the principles of international law.

What departments in DoD have published guidelines for responsible AI?

The Department of Defense (DoD) has published several guidelines for responsible AI. Some of the notable departments within the DoD that have published guidelines for responsible AI include:

  1. Defense Innovation Board (DIB): The DIB is a federal advisory committee that provides independent advice and recommendations to the Secretary of Defense. In 2019, the DIB published a set of ethical principles for developing and deploying AI in the military, recommending that the DoD's use of AI be responsible, equitable, traceable, reliable, and governable.
  2. Joint Artificial Intelligence Center (JAIC): The JAIC was responsible for accelerating the adoption of AI across the DoD. The JAIC published several guidelines for responsible AI, including the JAIC Ethical AI Framework, which provides a set of ethical principles and best practices for developing and deploying AI.
  3. Chief Digital and Artificial Intelligence Office (CDAO): In February 2022, the DoD integrated the JAIC, the Defense Digital Service (DDS), the Chief Data Officer, and the enterprise data platform Advana into one organization, the CDAO, so that data, analytics, and AI-enabled capabilities can be developed and fielded at scale. This foundation ensures that the DoD has the necessary people, platforms, and processes to continuously provide business leaders and warfighters with agile solutions.
  4. Defense Advanced Research Projects Agency (DARPA): DARPA is a research and development agency within the DoD. DARPA has established several programs focusing on responsible AI, including the Explainable AI (XAI) program, which aims to develop AI systems that can explain their decision-making processes to human operators.
  5. Department of Defense AI Center of Excellence (DCoE): The DCoE was established in 2021 to lead the DoD's efforts to adopt AI responsibly. The DCoE is responsible for developing and implementing policies and procedures related to the ethical use of AI in the military.

These departments within the DoD are actively working to develop and implement guidelines for responsible AI. As the use of AI in the military continues to grow, the DoD must continue to prioritize ethical considerations in developing and deploying these technologies.

The rapid sophistication and deployment of LAWS by China, Russia, the United States, and other nations demand policies that account for the risks, including the inevitable accidents and unintended escalation. To what extent can a LAWS deal with immediate uncertainty? As Mike Tyson wisely opined, "Everyone has a plan until they get punched in the mouth." What are the essential capabilities of an autonomous system in such circumstances? A wide range of ethical, legal, and moral dilemmas exists when humans are removed from deadly decisions. Or does it?

In the article Autonomous Weapons Systems and the Laws of War, Michael T. Klare writes:

The potential dangers associated with the deployment of AI-empowered robotic weapons begin with the fact that much of the technology involved is new and untested under the conditions of actual combat, where unpredictable outcomes are the norm. For example, it is one thing to test self-driving cars under controlled conditions with human oversight; it is another to let such vehicles loose on busy highways. If that self-driving vehicle is covered with armor, equipped with a gun, and released on a modern battlefield, algorithms can never anticipate all the hazards and mutations of combat, no matter how well 'trained' the algorithms governing the vehicle's actions may be. In war, accidents and mishaps, some potentially catastrophic, are almost inevitable.

Extensive testing of AI image-classification algorithms has shown that such systems can easily be fooled by slight deviations from standardized representations. In one experiment, a turtle was repeatedly identified as a rifle. Such systems are vulnerable to trickery, or "spoofing," and to hacking by adversaries.
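To make the spoofing concern concrete, the sketch below shows one well-known way a classifier can be fooled: the fast gradient sign method (FGSM), which nudges each pixel slightly in the direction that increases the model's loss. It is a minimal illustration only, assuming PyTorch and torchvision are available; the stock ResNet-18 model, the random stand-in image, the class index, and the epsilon value are illustrative placeholders, not details of the turtle-rifle experiment described above.

    # Minimal FGSM sketch: a barely visible perturbation that can flip a
    # classifier's prediction. Model, image, label, and epsilon are placeholders.
    import torch
    import torch.nn.functional as F
    from torchvision import models

    # Any differentiable image classifier would do; ResNet-18 is a stand-in.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

    def fgsm_attack(image, true_label, epsilon=0.01):
        # Compute the loss gradient with respect to the input pixels,
        # then step each pixel slightly in the direction that increases the loss.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        perturbed = image + epsilon * image.grad.sign()
        return perturbed.clamp(0, 1).detach()

    # Hypothetical usage: a stand-in image tensor and an arbitrary "true" class.
    image = torch.rand(1, 3, 224, 224)       # stand-in for a real photograph
    label = torch.tensor([35])               # illustrative ImageNet class index
    adversarial = fgsm_attack(image, label)
    print(model(adversarial).argmax(dim=1))  # may no longer match the true class

The perturbation is bounded by epsilon, so the altered image looks essentially identical to a human observer, which is precisely what makes this class of attack troubling for weapons systems that rely on automated target recognition.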

This danger is all the more acute because, on the current path, autonomous weapons systems will be accorded ever-greater authority to make decisions on the use of lethal force in battle. Although US authorities insist that human operators will always be involved when armed robots make life-and-death decisions, the trajectory of technology is leading to an ever-diminishing human role in that capacity, heading eventually to a time when humans are uninvolved entirely. This could occur as a deliberate decision, such as when a drone is set free to attack targets fitting a specified appearance ("adult male armed with a gun"), or as a conditional matter, as when drones are commanded to fire at their discretion if they lose contact with human controllers. A human operator is somehow involved in launching the drones on those missions, but no human orders the specific lethal attack.

Maintaining ethical norms

This poses obvious challenges because virtually all human ethical and religious systems view the taking of human life, whether in warfare or not, as an act of supreme moral consequence requiring some valid justification. Humans, however imperfect, are expected to abide by this principle, and most societies punish those who fail to do so. Faced with the horrors of war, humans have sought to limit the conduct of belligerents in wartime, aiming to prevent cruel and excessive violence. Beginning with the Hague Convention of 1899 and subsequent agreements forged in Geneva after World War I, international jurists have devised a range of rules, collectively known as the laws of war, proscribing certain behaviors in armed conflicts, such as the use of poisonous gas.

Following World War II and revelations of the Holocaust, diplomats adopted additional protocols to the Hague and Geneva conventions to better define belligerents' obligations in sparing civilians from the ravages of war, measures generally known as international humanitarian law. So long as humans remain in control of weapons, in theory, they can be held accountable under the laws of war and international humanitarian law for any violations committed when using those devices. What happens when a machine decides to take a life, and questions arise over the legitimacy of that action? Who is accountable for any crimes found to occur, and how can a chain of responsibility be determined?

These questions arise with particular significance regarding two critical aspects of international humanitarian law, the requirement for distinction and proportionality in using force against hostile groups interspersed with civilian communities. Distinction requires warring parties to discriminate between military and civilian objects and personnel during combat and spare the latter from harm to the greatest extent possible. Proportionality requires militaries to apply no more force than needed to achieve the intended objective while sparing civilian personnel and property from unnecessary collateral damage.

My take

These principles pose a particular challenge to fully autonomous weapons systems because they require a capacity to make fine distinctions in the heat of battle. For example, it may be relatively easy in a sizeable tank-on-tank battle to distinguish military from civilian vehicles. Still, in many recent conflicts, enemy combatants have armed ordinary pickup trucks and covered them with tarps, making them almost indistinguishable from civilian vehicles. Could a hardened veteran spot the difference? Probably, but an intelligent robot? Unlikely.

Similarly, how does one gauge proportionality when attacking enemy snipers firing from civilian-occupied tenement buildings? For robots, this could prove an insurmountable challenge. Advocates and critics of autonomous weaponry disagree over whether such systems can be equipped with algorithms that can distinguish between targets to satisfy the laws of war.

"Humans possess the unique capacity to identify with other human beings and are thus equipped to understand the nuances of unforeseen behavior in ways that machines, which must be programmed in advance, simply cannot," analysts from Human Rights Watch (HRW) and the International Human Rights Clinic of Harvard Law School wrote in 2016.

Another danger arises from the speed with which automated systems operate and from plans to deploy autonomous weapons systems in coordinated groups or swarms. The Pentagon envisions scenarios in which many drone ships and aircraft are released to search for enemy missile-launching submarines and other critical assets, including mobile ballistic missile launchers. US adversaries rely on those missile systems as an invulnerable second-strike deterrent to a US disarming first strike. Should Russia or China ever perceive that swarming US drones threaten the survival of their second-strike systems, those countries could feel pressured to launch their missiles when such swarms are detected, lest they lose them to a feared US first strike.

