AI and National Security

With the advent of AI-powered technologies like DeepFakes – which can generate synthesized media of astonishing realism – it is of utmost importance to consider the holistic impact of such technologies on national security and democracy. It is equally important to question the use of AI in building lethal autonomous weapons systems (LAWS) by military and defense organizations around the world, and the potentially serious consequences of doing so for our society. In this article, we discuss these critical aspects of AI from different viewpoints.

DeepFakes and Their Impact

To begin with, one might argue that synthesized images, audio and video have been around for decades, so do DeepFakes really warrant any serious concern? The answer lies in the following recent events fuelled by DeepFakes around the globe:
 
  • A DeepFake video used by a Flemish socialist party in Belgium to influence national elections in 2018,
  • A New Year’s address by the President of the African nation of Gabon that, after being labeled a DeepFake by the opposition, sparked a (failed) military coup in the country,
  • A doctored video of Nancy Pelosi, the Speaker of the U.S. House of Representatives, that falsely made her appear drunk at a news conference (the video was tweeted by President Donald Trump and spread widely by his supporters), and
  • Numerous pornographic DeepFake videos of women and even children, aimed at degrading and dehumanizing vulnerable sections of our society.
 
This abundance of evidence makes it amply clear that AI-powered DeepFakes, if left unchecked, can have serious consequences for any society and nation.
 
Two main factors lie behind the troubling rise of DeepFakes: a) the rapidly increasing accuracy of AI algorithms capable of rendering realistic and compelling synthesized content, which was impossible to achieve with earlier technologies, and b) the ease with which DeepFakes can be created from a modest amount of data, even by people without any AI or other technical knowledge. In essence, a minor with access to a computer and the internet can easily create a DeepFake video by taking images from their Facebook or Instagram account and feeding them into freely available online DeepFake applications. Again, this ‘convenience’ of fake content creation was not possible with earlier technologies.
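To make the mechanism concrete, below is a minimal sketch of the shared-encoder, two-decoder autoencoder design that many early DeepFake face-swap tools built on. It is written in Python with PyTorch; all layer sizes are illustrative assumptions, random tensors stand in for real face crops, and it is not the code of any actual DeepFake application.

```python
import torch
import torch.nn as nn

# Sketch of the shared-encoder / two-decoder idea behind classic DeepFake
# face swapping. Dimensions and the random placeholder "faces" are
# illustrative assumptions only.

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()    # shared: learns a generic "face" representation
decoder_a = Decoder()  # reconstructs person A's face
decoder_b = Decoder()  # reconstructs person B's face

faces_a = torch.rand(8, 3, 64, 64)  # placeholder for real crops of person A
faces_b = torch.rand(8, 3, 64, 64)  # placeholder for real crops of person B

loss_fn = nn.MSELoss()
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-3,
)

# Train each decoder to reconstruct its own person from the shared encoding.
for step in range(100):
    opt.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The "swap": encode person A's face, decode with person B's decoder, so the
# output shows B's likeness with A's pose and expression.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
print(swapped.shape)  # torch.Size([8, 3, 64, 64])
```

Real tools wrap this idea in face detection, alignment and blending stages, but the core trick is just this: because the encoder is shared, feeding one person’s face through the other person’s decoder swaps the identity while keeping the pose and expression.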
 
What is more troubling is that detecting a DeepFake is as hard as creating one is easy. Even with state-of-the-art AI research methods (at the time of writing this article), it is quite difficult to determine whether a given video or image is a DeepFake or authentic. Put differently, while a minor can create a DeepFake from their dormitory room using a computer, the internet and a Facebook or Instagram account, detecting one requires experienced AI experts, a forensic team, highly sophisticated AI algorithms and elaborate computing infrastructure.
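For a rough sense of what the simplest detection approaches look like, here is a hypothetical Python/PyTorch sketch that fine-tunes an off-the-shelf image classifier to label individual video frames as real or fake. The random tensors are placeholders for the labelled dataset of face crops a real system would need, and it assumes torchvision 0.13 or newer; actual forensic pipelines are far more elaborate.

```python
import torch
import torch.nn as nn
from torchvision import models

# Minimal sketch: fine-tune a pre-trained image classifier to label single
# frames as "real" (0) or "fake" (1). Random tensors stand in for a labelled
# dataset of face crops. Requires torchvision >= 0.13 for the weights API.

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # new head: 2 classes

frames = torch.rand(16, 3, 224, 224)   # placeholder frame batch
labels = torch.randint(0, 2, (16,))    # placeholder real/fake labels

loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.fc.parameters(), lr=1e-4)  # train head only

model.train()
for step in range(10):
    opt.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    opt.step()

# Inference: estimated probability that each frame is synthetic.
model.eval()
with torch.no_grad():
    probs = torch.softmax(model(frames), dim=1)[:, 1]
print(probs)
```

A key weakness of such frame-level classifiers is that they tend to generalize poorly to generation methods they were not trained on, which is one reason detection keeps lagging behind creation.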
 
Different types of drones can already be used as weapons, but in the future, with advanced AI technologies, they could be equipped to use lethal force autonomously.
Photo: Free_styler / Alamy Stock Photo

AI and Lethal Autonomous Weapons Systems (LAWS)

Another aspect of AI pertaining to security is the use of AI-powered technologies by military and defense organizations. A major application of AI in such organizations is to process raw data – images, text, audio – using advanced AI algorithms and transform it into useful intelligence. For example, in 2017 the U.S. Department of Defense started Project Maven – also known as the Algorithmic Warfare Cross-Functional Team – in collaboration with Google, with the objective of automatically identifying objects in large sets of images and videos using AI, which could potentially be used to improve drone strikes and other lethal attacks on the battlefield. Over 3,000 Google employees signed a petition in protest against the company’s involvement in the project, as a result of which Google promised not to renew the contract when it expires in 2019. However, it is not far-fetched to assume that similar applications of AI will continue to be built in different countries for use on the battlefield in one form or another; desirable or not, this is inevitable.
 
The gravest of all concerns, however, is the impact of AI on the advancement of Lethal Autonomous Weapons Systems (LAWS). The core idea behind LAWS is that, once activated, they would search for, identify, select and attack targets without any human intervention. AI provides powerful, highly accurate methods to do exactly that: search for, identify and select a target. To illustrate, consider a scenario where a fleet of microdrones is trained using AI technologies (such as computer vision and machine learning) to identify particular human targets in a neighboring state. The fleet could be launched from hundreds or even thousands of kilometers away. The microdrones enter the state, penetrate walls and windows, avoid being shot thanks to their small size and built-in warfare features such as anti-sniper measures, find the human targets and ‘execute’ the mission – all without involving human troops, launching missiles or dropping bombs.
 
Based on this scenario, one might ask two questions. First, how realistic is it from a technological perspective, i.e., do we have the technology to make this scenario possible? The answer is: almost, yes. AI provides most of the methods required to implement it, and the accuracy of those methods is increasing sharply with time. Second, aren’t there any international regulations to prevent this scenario from taking place in the real world? In March 2019, the United Nations Secretary-General said that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law”. However, as of today, no such international law exists. There are ongoing efforts in the UN to ban the development, production and use of LAWS. As of November 2018, 28 countries supported such a ban, but 12 countries – including the U.S., the U.K. and Russia – opposed even negotiating a treaty on LAWS. Given these facts, it may be speculated that the development of LAWS using AI-powered technologies is already underway in certain countries, which is highly alarming.
Sea Hunter is an autonomous unmanned surface vehicle launched in 2016 and currently in testing. If the tests are successful, such craft may be armed and used for anti-submarine and counter-mine duties.
Photo: U.S. Navy Photo / John F. Williams

Looking Ahead

AI can be a boon to our society in a myriad of ways, and there is no doubt that its effective utilization can make our society better and safer for everyone. However, it can also be put to adverse use, especially from the security perspective, with serious consequences for national as well as global security. The need to act fast on issues like LAWS and DeepFakes has never been greater. As the UN chief said, “The world is watching, the clock is ticking and others are less sanguine. I hope [we] prove them wrong.” Let’s hope so too!

Writer: Nidhi Singh

Dr. Nidhi Singh is Director of Research at Elisa Corporation in Helsinki. Prior to joining Elisa, she worked with Fortune 500 as well as mid-sized companies for over 15 years in various R&D and leadership roles. Her focus has been on applying AI and machine learning to solving complex, large-scale IT problems in domains like intelligent automation of customer services, online fraud detection in e-commerce, and energy optimisation for data centres. She received her Ph.D. in computer science and has been a reviewer for a number of premier AI conferences and journals.

More on the topic:

In April, Russia launched the world’s largest submarine, the nuclear-powered Belgorod. The submarine can carry six Poseidon torpedoes. The torpedoes can be fitted with nuclear warheads and operated either by remote control or autonomously. Photo: Oleg Kuleshov/Tass/GettyImage

The Defence Agenda of the 2020s

The erosion of the rules-based world order has created instability, and the importance of military power in world politics has once again been accentuated. New technologies and digitalisation will influence the development of weapons systems.

Read the article »