
Photo: Markus Spiske/Unsplash

Published: 2021-08-09

Interview about artificial intelligence and the future of conflict

Few subjects are more widely discussed in international security than the application of artificial intelligence to military technologies. It is a consequential issue that will have a major impact on the future of warfare, and possibly also on the future of the nation state. In this interview, Zebulon Carlander, Society & Defence’s Program Manager for Security Policy, speaks with Dr Kenneth Payne, who recently published the book “I, Warbot: The Dawn of Artificially Intelligent Conflict” (Hurst & Co., 2021), which delves into how artificial intelligence will shape the future of defence and strategy. The interview follows below in English.

Can you explain what you mean by describing AI as a decision-making technology rather than a weapon?

– The first thing to say is that in strategic studies somebody is always having a revolution and claiming that something is about to definitively change warfare forever. I am a bit reluctant to dive into that sort of thing. But, if you remember your Clausewitz, he said that the nature of war was essentially human. It was political and social, and war was something done to humans by humans. What strikes me about AI is that it is inhuman in its decision-making, and that is what is revolutionary. It is not a weapon like an iron sword or a catapult; rather, it is a general-purpose technology that can be applied to all manner of activities, including military ones.

– If you think about a parallel technology, the nearest would be something like electricity or maybe even writing. These are general-purpose technologies that can be applied to a whole range of things that humans do, and that have had a big impact on warfare. Writing is a way of outsourcing our cognition. It means you can share it, people can refer back to it, and you can accumulate facts in a very different way than you can in oral traditions. I think that, in a similar way, AI has the potential to have at least as dramatic an impact on warfare, because it makes decisions in ways that are fundamentally non-human, and that is a departure from everything that has gone before.

You write that liberal democracies must strike a balance between confronting authoritarian states and not compromising their own values. Can you explain your thinking?

– The first thing to say is that it can be really hard to do that. Technology, societies and war have always existed in a kind of three-way relationship. Societies generate technologies, some of which are used for war; war then changes the society, and the technologies can change societies too. There are all these feedback loops between the sorts of technologies that a society imagines and creates, the impact that those have on war and, finally, the impact that war has on the societies themselves. Precision-guided weapons are a good example here. They got their breakthrough in the 1970s, after the Vietnam War, when the United States wanted to use precision-guided munitions as a way of offsetting the Soviet Union’s conventional superiority. I do not think there was much of an explicit liberal logic in doing that, but one certainly emerged. Once you had precision-guided weapons, that fed into liberal expectations of how you fight wars: you fight them in a way that is precise and discriminate, that uses minimal force and reduces collateral damage. So the technology shaped the liberal expectation, which then shaped the development of subsequent technologies.

– I think the same is true for AI, and there is a sense in which AI is already changing liberal societies, in some good ways. Think about its use, or potential use, in nudging health decisions or shaping your consumption choices, or even something as simple as managing road traffic networks. So there are upsides, but there are real challenges with AI for liberal societies as well. Those have to do with the boundary between the public and private spheres. If a state has the capacity to do AI-enabled surveillance, or to gather and analyse all this data it couldn’t before, that is a challenge for liberal societies if they want to retain the quintessence of what makes them liberal. Bias is another challenge. We know that AI systems capture and replicate a good degree of the human bias in the data that is fed into them. That bias and potential for unfairness is a challenge for liberal societies.

– Another thing I’d say about liberal societies and AI is that the history of AI has, to this point, largely been the history of liberal societies doing computer science and working on developing artificial intelligence. It is a technology that grew up in the United States and, to a lesser extent, in Western Europe. The Soviet Union did not do much computer science until very late, and it struggled with it. But now, for the first time, with China, AI has moved out beyond its liberal origins. I still think liberal societies have a real advantage in developing cutting-edge AI systems. Among those advantages are a strong history of scientific innovation and robust intellectual property laws. The United States, especially, has this unique culture that is a blend of big federal spending, venture capital, university research and, most recently, huge private sector corporations with deep pockets. So there are some big advantages that the liberal west has in developing cutting-edge AI, and it is not clear to me that even China can easily replicate them.

– Where I do think that non-liberal, authoritarian societies have some potential advantages is AI in connection with biotechnology, which I write about in the book as well. Here it is a matter of ethical concerns. Liberal societies rightly try to safeguard what makes them liberal, particularly when it comes to biotechnological research involving animal experiments, human experiments and gene editing. They face barriers that places like China and Russia do not necessarily face. There is a good quote from the French Defence Minister on augmented technologies: “We are doing Iron Man, but we are not going to do Spiderman.” That captures it neatly, I think. But the problem is that if the other side is doing Spiderman, where does that leave you? So there are two challenges in everything that I have said for liberal societies. First, what are non-liberal societies prepared to do? Second, AI, like all technologies, has the potential to change the fundamentals of society.

In the book you caution against technological determinism. Can you elaborate on your thinking?

– In academic circles, technological determinism is seen as a big no-no. But if you look at a lot of military writing on AI, it is always either “The revolution is nigh” and “It is changing warfare fundamentally”, or “Nothing is changing” and “You guys are selling snake oil”. Both positions focus exclusively on the technology. They don’t focus on where it comes from, or on how culture and society shape the sort of technology that gets developed and the way in which it is then employed.

– I think that is true of a lot of the writing today on AI. The reverse is also true, in a sense, of the people who campaign against “killer robots”. They are very focused on the possibility of AI being used to kill humans with no human intervention, but they just look at the surface phenomenon without looking at the deep underlying trends that are driving us to that point. They don’t dig into the cultural aspects that are generating these pressures, in this case the security dilemma, or into the technological arguments that are driving you inexorably towards that point. Their answer just floats free of both the technology and the social pressures that create it. So that is what I mean about technological determinism. You’ve got to have a more realistic perspective on how technology emerges from particular cultural values and is then used in accordance with those values.

You put a lot of emphasis on culture and technology in the book. Can you explain the connection between them?

– It is also interesting to think about how military cultures (plural, because there is more than one culture in the military) shape the use of technology. Militaries, for very good reasons, are conservative and hierarchical. They like to do things as they’ve been done before, because generally what’s been done has worked and proven itself in battle. There are reasons to be conservative and hierarchical, because it helps you deal with the chaos of battle. But it can sometimes be an impediment to innovation and change. AI challenges the whole range of military activities. In the United Kingdom, we’ve got enthusiasm at the top, but not a lot of follow-through underneath. The question is whether the technology itself creates an unarguable logic for using it. The way in which it does that is by winning wargames, exercises and experimental work. If you create an unarguable logic for change, then change has to follow. So I think military cultures are sometimes particularly resistant to change, but AI has enough impetus behind it to drive change in this case.

In the book, you make the argument that AI will make a big difference in the tactical area, but you caution against using it as a strategic tool. Can you describe your reasoning? 

– Tactics is a problem that plays very well to AI as we understand it now, which is like a highly polished version of a calculator. It is something that can optimize things narrowly, in a way that plays very well to tactics. It is very similar to how you would train a soldier: it is repetitive. So, for tactical problems, such as moving platforms around, directing fire or managing logistical supply, AI can make a big impact. The problem is that as you widen the focus, you start to bring in cognitive requirements that are a little bit different, which Clausewitz called the ‘genius’ of the commander. You start to bring in the requirement for creativity, imagination and flair. These things are more slippery, because how do you program creativity into a machine?

– Now, that is not to say that machines can’t make contributions at the strategic level, and I argue that they can, as a kind of intelligence filter, winnowing down lots of the information that is coming in to help the human commanders. But I think that, at least where there is time and space to do so, strategy will essentially remain a human endeavour. It may be, though, that there isn’t time and space to do certain things, for example if the ladder of escalation is climbed very quickly by autonomous systems. It may be that those who are prepared to leave the human out of it for the longest have an advantage. That is a problem, if you are in an escalatory situation and your enemy is prepared to delegate decision-making to a machine for reasons of time.

How do you think smaller states like Sweden can best leverage the advantages of AI in military systems? 

– I do not know much about the state of Sweden’s indigenous AI capacity, or the university sector and companies that are doing AI there. I would also caveat it by saying that Sweden is somewhat of an atypical small state. It punches above its weight militarily, and it has a defence industry that is outsized relative to the size of the country and that has a history of innovation in defence. So I think it is in an enviable position relative to a lot of states. But more broadly, I think there is going to be a real problem for states that aren’t AI innovators and don’t generate their own cutting-edge AI, because they will become increasingly reliant on the largesse of the states that export to them.

– The F-35 is a leading example of this. But that is only the first step in a process that I think is going to become much more significant. It is this: AI’s principal advantage is as a decision-making technology, and that is its unique selling point. It does not create a faster or more maneuverable fighter jet, but it creates one that decides things faster than its adversary. If so, then you need to be at the cutting edge, because there is real pressure to be there. If my jet thinks a hundred times faster than a human, but yours thinks a hundred and two times faster than a human, then yours will win most of the time against my jet.

– So I need to have the best decision-making technology. Where is that decision-making technology coming from? Probably the United States. And it is most likely going to need updating very regularly to stay at the cutting edge. It is probably going to be sold as a closed box; that is, as a service rather than as a product. So you can have the F-35, but you would not see the code for this part of it, and you are going to be wholly reliant on the maker to keep the code up to date. That is fine for now, with code being updated every five years. But what if the code is being updated every week? Or what if the maker suddenly decides that it does not want to update your code anymore? Or, even worse, what if it reprograms your code to stop you being able to use the technology in the way that you want?

– There is a danger that if you are buying kit that is being updated extremely rapidly, you become a client state of the state that is selling it to you. That is true now, and it has been true forever. If you have been importing MiG-21s from the Soviet Union, then you are essentially a client state of the Soviet Union. But once you have got your MiG-21, or, if you are Iran, your F-14, you can keep it flying for decades. It is not clear to me that once you’ve bought your next air dominance fighter system, you are going to be able to keep that going if the United States, China or Russia decides that it does not want you to keep it going. So the problem for small states is this: if they can’t innovate cutting-edge technology, they have to import it, and they are then vulnerable to code updates, AI updates, from the state they are buying it from. I think we are moving into an era where kit is going to be much more disposable and where code is going to be replicable across platforms, so you are going to have much more mass. That is a real challenge for small states. Can they move to that model? Not without the help of a larger ally.

You finish the book by articulating three rules for warbots. These are (1) A warbot should only kill those I want it to, and it should do so as humanely as possible. (2) A warbot should understand my intentions, and work creatively to achieve them. (3) A warbot should protect the humans on my side, sacrificing itself to do so — but not at the expense of the mission. Can you explain your thinking behind these rules? 

– I thought about coming up with rules that echo Isaac Asimov’s three laws of robotics, but that are fit for purpose. I am not sure whether I did, because it is a real challenge. When I say that a warbot should only kill what I want it to, and that it should do so as humanely as possible, those are very subjective criteria. Who is the ‘I’? Who is making the decision, and when are they making it? Oftentimes, if you are in the middle of the battle, you do not know who you need to kill in order to win the battle. You are also continually changing your assessment of how badly you want to win the battle. Everything is fluid and interacting. When I am making this decision about who the warbot should kill, how am I uploading these decisions to the warbot? And that is before you get into the technical challenges of the warbot being able to find, fix and kill those people that I have asked it to find, fix and kill.

– The second rule is about the problem of unintended consequences. At the dystopian extreme, it is the idea of a sci-fi robot going off the rails and trying to take over the planet. But more realistic is the problem of getting the robot to capture your intentions, so that it follows them in a satisfactory way, and that has been a longstanding problem in AI research. In making the rule, I wasn’t trying to answer the question, just trying to frame it: this is the challenge, a warbot needs to understand my intentions. How do you get a warbot to understand your intentions? At the tactical level it is very easy; anything more elevated becomes more challenging.

– The last rule is about the challenge of conflicting goals: protect my people, win the battle. For Asimov, a robot should always protect human life, above all else. That is the answer to the classic robot rules. But that points to the main ethical difference between civil society and wartime. In wartime you can’t put the sanctity of human life above everything else. The whole point of war is that you have to take risks. Sometimes those risks are extreme, and you have to make a decision in the full knowledge that people are going to die, including people on your side. If you said to a warbot that at all costs it should save the human, you’d be hamstringing yourself in a way that you wouldn’t if you were giving orders to humans. You wouldn’t say to a military commander, without AI, that at all costs they shouldn’t lose anybody. In a nutshell, then, my rules are more a recognition of the scale of the problem. I am not sure I answered the problems of warbots, but my purpose in crafting those rules was to sharpen the focus on the challenge that faces liberal societies like yours and mine.

