
Experts call for public debate on ‘lethal autonomous weapons systems’

Dozens of business executives and technology experts in artificial intelligence and robotics have signed an open letter to the United Nations calling for public deliberation on the potential threats that could arise from “lethal autonomous weapons systems.”

The letter urges a U.N. Group of Governmental Experts “to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies.” The GGE was established by the Review Conference of the U.N. Convention on Certain Conventional Weapons and will be chaired by Ambassador Amandeep Singh Gill of India.

The letter reads, in part:

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close. We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.

The letter appears at a time when militaries worldwide are developing new methods of warfare that increasingly involve autonomous systems guided by machine intelligence, including the use of artificial intelligence, machine learning and cognitive analytics.

In the U.S., DARPA is looking to develop a mosaic warfare capability, which will entail “dynamic, coordinated and highly autonomous composable systems” that could be scaled against adversaries to create an asymmetric advantage.

In an August interview with Fifth Domain, SparkCognition CEO Amir Husain talked at length about the advent of hyperwar, which he characterized as “AI-fueled, machine-waged conflict.” Husain noted that the U.N. letter, which some observers have called “Elon Musk’s petition to ban AI,” is neither Musk’s petition, a petition to ban AI, nor a proposal for a specific solution. Musk, CEO of SpaceX and Tesla, is one of the letter’s dozens of signatories, as is Husain.

Husain said that Professor Toby Walsh of the University of New South Wales in Australia organized the letter and that it calls only for more debate on the topic of autonomous weapons. Husain added:

Along with dozens of others, I am a signatory to the letter because I believe in the importance of this issue and the fostering of debate. However, as I have extensively written, a blanket ban is unworkable and unenforceable. The history of nuclear proliferation gives us a tangible example of why, while the Prisoner's Dilemma from game theory provides a theoretical framework with which to think about the issue. I believe the solution – as much as one exists at this stage – is to redouble our investment in the development of safe, explainable and transparent AI technologies. Over the long arm of history, scientific progress is inevitable, and for me, that is not frightening. It is, instead, the most optimistic thought to illuminate the dark canvas that depicts the human condition to this point in history.
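As a rough illustration of the game-theoretic point Husain raises, the short Python sketch below models an arms race as a two-player Prisoner's Dilemma. The payoff values are hypothetical and chosen only to show the structure of the dilemma; they do not come from the letter or the interview.

    # Illustrative only: a two-player "arms race" with assumed, hypothetical payoffs,
    # showing the Prisoner's Dilemma structure Husain refers to.
    PAYOFFS = {
        ("restrain", "restrain"): (3, 3),  # mutual restraint: best joint outcome
        ("restrain", "develop"):  (0, 5),  # the side that restrains is left vulnerable
        ("develop",  "restrain"): (5, 0),
        ("develop",  "develop"):  (1, 1),  # arms race: worse for both than mutual restraint
    }

    def best_response(opponent_choice):
        # Pick the option that maximizes the first player's payoff
        # against a fixed choice by the opponent.
        return max(("restrain", "develop"),
                   key=lambda mine: PAYOFFS[(mine, opponent_choice)][0])

    for opponent in ("restrain", "develop"):
        print(f"If the other side chooses to {opponent}, "
              f"the best response is to {best_response(opponent)}")

With these assumed payoffs, “develop” is each side’s best response regardless of what the other side does, which is the intuition behind Husain’s argument that a blanket ban would be unworkable and unenforceable.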

For years, the U.S. military has been working on a so-called Third Offset Strategy, an Obama administration phrase that the Trump administration has so far not broadly adopted in its public remarks. The U.S.’s Second Offset ran from the 1970s through the 1980s and involved stealth aircraft, precision-guided weapons and computerized command and control.

Few officials talk publicly about the specific technologies the third offset will involve, but military analysts assume it will entail cyber, electronic and space warfare capabilities. Many experts consider autonomy and AI fundamental components.

The secrecy around specific third-offset capabilities is not unusual. The U.S. did not openly discuss the second offset’s stealth capabilities – developed in the 1970s – until 1989.

Former Deputy Defense Secretary Robert Work said in 2015 that the third offset would entail humans and machines working closely together rather than autonomous weapons systems acting independently of humans. The vision of human-machine collaboration has also been called the Centaur Strategy. Work reiterated the theme in an October 2016 talk at a Center for Strategic and International Studies conference.

But given the breakneck speed of technological innovation and other countries’ rapid capabilities development, even this recent vision of close human-machine collaboration may evolve to entail more machine autonomy than originally envisioned, as Husain has speculated.

In an article co-written with retired U.S. Marine Corps Gen. John R. Allen and published last month in the U.S. Naval Institute’s Proceedings magazine, Husain wrote:

What makes this new form of warfare unique is the unparalleled speed enabled by automating decision making and the concurrency of action that become possible by leveraging artificial intelligence and machine cognition. … In military terms, hyperwar may be redefined as a type of conflict where human decision making is almost entirely absent from the observe-orient-decide-act (OODA) loop. As a consequence, the time associated with an OODA cycle will be reduced to near-instantaneous responses. The implications of these developments are many and game changing.
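To make the speed claim concrete, the toy arithmetic sketch below (not from the article) simply counts how many complete OODA cycles fit into one minute at assumed latencies; both latency figures are hypothetical placeholders.

    # Toy arithmetic only; the latency figures below are assumed, not sourced.
    HUMAN_OODA_SECONDS = 30.0    # hypothetical human-in-the-loop decision cycle
    MACHINE_OODA_SECONDS = 0.05  # hypothetical automated decision cycle

    WINDOW_SECONDS = 60.0        # one minute of engagement time
    human_cycles = WINDOW_SECONDS / HUMAN_OODA_SECONDS
    machine_cycles = WINDOW_SECONDS / MACHINE_OODA_SECONDS

    print(f"Human-in-the-loop cycles per minute: {human_cycles:.0f}")
    print(f"Automated cycles per minute:         {machine_cycles:.0f}")
    print(f"Relative speed-up: {machine_cycles / human_cycles:.0f}x")

Under these assumptions, the automated loop completes hundreds of cycles for every human one, which is the kind of compression Husain and Allen describe as near-instantaneous response.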

Musk’s signature on the open letter and the media attention it received are not surprising. The billionaire technology entrepreneur has repeatedly warned of the “existential threat” of AI, calling it “the biggest risk we face as a civilization.” Critics such as Google co-founder Larry Page and Facebook CEO Mark Zuckerberg have downplayed those warnings in favor of a more optimistic vision of AI. Some have suggested that Musk’s dire warnings are part of his marketing strategy.

Detractors notwithstanding, Husain and other technologists view a future shaped by AI as inevitable and consider attempts to slow or stop AI research and development futile. In the August interview with Fifth Domain, Husain said, “[H]yperwar is a consequence of the militarization of artificial intelligence. It is here and will only become more significant with time. We must understand it and factor the implications of broad, widely deployed autonomous systems into our planning and our thinking.”
