The weaponization of technology is as old as warfare itself. Throughout history, innovations from dynamite to the airplane have been used for unintended, and often destructive, purposes.
Today’s technological boom—what is often referred to as the “Fourth Industrial Revolution”—is ushering in a suite of technologies that have the potential to transform society, but that can also be repurposed for malicious ends. The growing range of artificial intelligence-based innovations is no exception.
Addressing the peace and security challenges posed by new weapons and means of warfare is an important responsibility of the United Nations and my office, the UN Office for Disarmament Affairs.
However, when I first started my job as the High Representative for Disarmament Affairs twelve months ago, I must admit I was not well-versed in the nuances of AI, including its impact on international peace and security.
It was at the AI for Good Global Summit, hosted by the International Telecommunication Union in Geneva last June, that I began to realize just how much our actions now will shape the staggering advances taking place in computational science and robotics.
AI-based technologies are revolutionizing transportation, manufacturing, healthcare, and education, creating potential for radical improvements in the lives of the world’s most vulnerable people.
When it comes to international peace and security, the same technology that underlies these innovations could be used to great benefit, from the verification of treaty compliance—such as the pioneering work being done by the Comprehensive Nuclear-Test-Ban Treaty Organization—to applications in peacekeeping operations and the delivery of humanitarian assistance.
However, it is also true that this technology could be weaponized, with the potential to transform existing weapons and their delivery systems, as well as decision-making structures. Military applications of AI could include autonomous weapons systems and command-and-control structures for use in every domain of warfare, including cyberspace and outer space, and possibly even in nuclear arsenals.
As countries seek to become technological leaders in this field, we are witnessing the beginnings of a 21st century version of the Cold War space race. We need to make sure it doesn’t become a 21st century version of the Cold War arms race.
These military applications raise serious questions. For autonomous weapons systems, they include how to ensure such systems are used in full compliance with international humanitarian law and international human rights law. As with all networked devices, AI-enhanced weapons systems could also be vulnerable to outside interference or hacking, further increasing the risk of miscalculation and confusion.
The democratization of technology and its unprecedented dissemination have been a boon to millions, and AI-based applications are no exception. However, care must also be taken with regard to how this technology could be misused. Malicious non-state actors such as terrorist groups are showing an increasing aptitude for turning technology to their own purposes, from drone attacks to online recruitment.
As Secretary-General Guterres recently said, “Our challenge is to maximize the benefits of the technological revolution while mitigating and preventing the dangers.”
Doing so requires a multifaceted response. But before we—and by we, I mean the international community—even start, we need to have some serious conversations to better understand how this technology is already being used, as well as its long-term ramifications. We need to make sure those deliberations are inclusive—levels of understanding about AI and its applications vary, and we all need to get on the same page.
We also need to develop the multi-stakeholder coalitions necessary to address the challenges and opportunities of the AI revolution. The private sector is a key driver behind much of this technology, and many leading companies in the field have already said they fear that their advances could be weaponized or repurposed in ways that challenge our collective ability to respond. In addition to industry, governments should also welcome discussions with humanitarian organizations and with academics working at the cutting edge of AI-based innovation.
Finally, there is a spectrum of potential responses that might be necessary—from “soft law” approaches such as industry codes of conduct, to more formal transparency and confidence-building measures, to, if necessary, legally binding instruments. Each of these possibilities should be properly examined.
I firmly believe that the United Nations remains the forum in which the global community can address the pressing peace and security challenges of the day, including those posed by AI and other emerging technologies.
Some of the risks and challenges associated with the emergence and application of new technologies are already being addressed at the UN. For example, the Convention on Certain Conventional Weapons serves as one forum in which countries can discuss possible applications of AI with a range of stakeholders, as well as policy options to address concerns.
As the Secretary-General said, now is the time for all of us to come together and “consider what should constitute responsible state behavior and responsible innovation,” including in the field of AI-based innovations and applications.
United Nations Under-Secretary-General and High Representative for Disarmament Affairs
Ms. Izumi Nakamitsu assumed her position as Under-Secretary-General and High Representative for Disarmament Affairs on 1 May 2017. Prior to taking on this post, Ms. Nakamitsu had served since 2014 as Assistant Administrator of the Crisis Response Unit at the UN Development Programme.
She has many years of experience within and outside the UN system, most recently as Special Adviser Ad Interim on Follow-up to the Summit on Addressing Large Movements of Refugees and Migrants between 2016 and 2017. She was previously Director of the Asia and the Middle East Division of the UN Department of Peacekeeping Operations between 2012 and 2014, and Director of the Department’s Division of Policy, Evaluation and Training, from 2008 to 2012.
Born in 1963, Ms. Nakamitsu holds a Master of Science degree in Foreign Service from Georgetown University in Washington, D.C., and a Bachelor of Law degree from Waseda University in Tokyo.
She is married and has two daughters.