Imagine a meeting of the US President's National Security Council where a new military adviser occupies one of the chairs - if only virtually, because this adviser is an advanced artificial intelligence (AI) system. It may sound like science fiction, but in the not-too-distant future the United States may well have the capacity to build and deploy this kind of technology. The AI adviser is unlikely to replace the council's traditional members, such as the secretary of defense, the secretary of state, and the White House chief of staff. However, the presence of AI at that table could have some fascinating - and challenging - consequences for the way decisions are made. The effects could be even more significant if the US knew that its adversaries had similar technology at their disposal.
To understand how the spread of artificial intelligence might affect national security decision-making at the highest levels of government, we designed a hypothetical crisis in which China imposed a blockade on Taiwan, and then assembled a group of technology and regional experts to consider the opportunities and challenges that including AI would bring to such a scenario. In particular, we explored how the spread of advanced AI capabilities around the world could affect the speed of decision-making, perception and misperception, groupthink, and bureaucratic politics. Our conclusions were not always what we expected.
AI can slow down decision-making
Because AI systems may be able to gather and synthesize information faster than humans, and to identify trends in large data sets that humans might miss, they can save valuable analysis time by giving human decision makers a better-informed basis for assessment. As Deputy Secretary of Defense Kathleen Hicks argued in November 2023, "AI-based systems can help speed up decision-making by commanders and improve the quality and accuracy of those decisions."
However, our workshop discussions highlighted several ways in which AI could work in the opposite direction.
First, while AI systems can help organize and search data, they also produce more data. This means that AI can raise as many questions as it answers. Decision makers in a crisis would have to spend valuable time evaluating, integrating, and establishing the credibility of these additional data sources and other AI outputs. In fact, during our fictitious Taiwan crisis, when we offered our experts a hypothetical AI assistant that could propose different possible courses of military action and explain their likely consequences, the experts immediately wanted to know more about the underlying AI system so they could interpret its recommendations. They needed to understand why the system was making certain recommendations before they could place any degree of confidence in the suggested actions. They also wanted to compare the AI's recommendations with more traditional sources of information - above all, the real human experts around the table. In effect, the AI became just another voice in the process, one that also had to earn the trust of decision makers.
The spread of AI could also slow decision-making by creating uncertainty about adversary intentions and forcing policymakers to consider whether and how AI might be shaping adversary actions. For example, deepfake videos could influence a crisis in a variety of ways, such as fake news reports aimed at swaying domestic public opinion or obscuring an adversary's operations or motives. Even if policymakers outright rejected low-quality or questionable deepfakes - a reasonable possibility given the vast amount of fake information already circulating on social media - the public reaction to convincing deepfake videos could still increase pressure on the White House for more decisive action. Weighing this public outcry against the uncertainty inherent in an environment of widespread AI use would likely hinder quick decision-making.
In our scenario, for example, we considered the public's reaction to a fake video of the Taiwanese president being arrested by Chinese security forces. It is easy to see how such a video, even if intended only to signal that China had dealt the decisive blow in the crisis, could immediately put pressure on congressional leaders to act more forcefully. Uncertainty about the footage's authenticity, given China's advanced AI capabilities, could complicate and slow decision-making. Even if the incendiary material were proven false, it might be too late to stop the avalanche of public demands, and national security decision makers could find themselves with very little room for maneuver.
AI can prevent groupthink... or make it worse
AI's capacity to slow the decision-making process could have negative consequences if it caused the United States to lose the initiative, forcing it into a reactive position that leaves it always one step behind its adversary. However, slowing down could also have advantages.
One of these advantages is that, when used effectively, AI-supported systems could challenge leaders' basic assumptions. Decision makers in a crisis would likely be reluctant to leave the final decision to an AI assistant, but the assistant could still be useful if it offered unconventional ideas, red-teamed preferred strategies, or ensured that decision makers had considered all the main alternatives and key variables. Designed and used in this way, AI systems could strengthen the decision-making process by breaking the groupthink that groups often fall into under time pressure. AI could also help overcome other decision-making pitfalls - such as the tendency to anchor on the first options presented or to overweight the most recent information - by expanding the range of options considered. Evaluating additional courses of action would be time-consuming and could further slow deliberation, but it could be worth the effort if it produced better decisions.
Unfortunately, AI could also have the opposite effect and encourage groupthink, especially where decision makers place a high degree of confidence in the AI system's capabilities. In that situation, overconfidence in the technology relative to the frailties of the human mind may lead decision makers to converge on a single view - that of the AI. It is not difficult to imagine, for example, an intelligence analyst hesitating to question the AI if the system is perceived as omniscient. One of our experts compared having AI at a National Security Council meeting to having Henry Kissinger sitting at the table. In other words, the pressure to agree with the AI system's recommendations could be strong even in a group setting, especially under time pressure, and this could sideline even the most experienced experts with the most innovative ideas.
Clearly, this is a situation to avoid, but simply keeping a human in the loop may not be enough to prevent AI from dominating the process.
AI can reinforce existing bureaucratic advantages
AI assistants for national security decision-making would not be neutral with respect to the main agencies that currently shape key policy and other decisions - the State Department, the Department of Defense, and the Intelligence Community. On the contrary, these systems could increase the weight and influence of some of these agencies at the expense of others.
Since AI systems are largely shaped by the algorithms and assumptions built into them, they are likely to reflect the biases of their creators, even if unintentionally. As a result, the bureaucracy that develops and owns a system could gain outsized power in the decision-making process. An AI assistant created by the Department of Defense or the Intelligence Community might recommend a different military action than a system developed by another agency would.
This might not be a cause for concern if all government agencies were equally well positioned to develop AI systems, but that is not the case. The bureaucracies with the most resources are the most likely to develop the most advanced and capable systems. If that means the Department of Defense develops and owns the most powerful AI platform, the result could be greater defense influence over decision-making during a crisis.
AI can amplify misperception
Because AI systems offer systematic ways of sifting through large amounts of information, they are often assumed to reduce misperception. During our discussions, however, several reasons emerged why this may not be true, along with ways in which the spread of artificial intelligence could amplify some sources of misperception and ultimately escalate a crisis.
In his political science classic, Perception and Misperception in International Politics, Robert Jervis argues that people tend to view the actions of others as more centralized, coordinated, and deliberate than they really are. When it comes to AI, this tendency to assume intentions that aren't actually there can be reinforced by widespread trust in the accuracy of AI tools.
We presented a scenario in which decision makers have intelligence indicating that an adversary (in this case China) has embedded an AI system deep in its decision-making process and removed humans from that process. An adversary might advertise this as a way to demonstrate commitment to a particular course of action, such as a readiness to escalate. In this case, the AI system can serve as a kind of hand-tying mechanism, precommitting the adversary to a kinetic response.
In that scenario, uncertainty about the presence and role of the AI system made it significantly harder for our experts to interpret the adversary's intentions. In particular, it became unclear whether the adversary's moves were driven by the AI or by a human. In a real crisis, American policymakers would likely be equally uncertain whether machines or humans were on the other side of the physical or virtual battlefield. Uncertainty about the role and presence of AI would also make it harder to send and interpret signals, increasing the risk of misperception and miscalculation and creating a perfect storm for unintended escalation even when both sides would prefer to avoid conflict.
In the Taiwan scenario, for example, US decision makers' assessments of whether AI was making decisions for Beijing could influence their interpretation of China's actions and shape their response. If decision makers know that an adversary is using AI systems, the prevailing tendency would likely be to view risky or aggressive behavior as the deliberate design of that system. If the moves of an adversary's AI system are automatically interpreted as deliberate, without fully considering alternative explanations, the chances of escalation increase. In fact, a separate experiment conducted by Michael Horowitz and Erik Lin-Greenberg showed just that: participants were more willing to retaliate when an adversary's AI-based weapon accidentally killed Americans than when a human operator was involved, revealing a greater tendency to forgive human error than machine error.
Training and previous experience are essential
Of course, in reality, AI systems are only as good as the data they rely on, and even the best AI is biased, makes mistakes, and breaks down in unexpected ways. In the end, they may be less accurate than human experts, especially in cases where context matters most. How much prior experience a decision-making group has with an AI system, and how well trained it is in the system's capabilities, could ultimately determine whether the AI's effects are beneficial or harmful. There is therefore a great need for training in the use of these tools, a fact increasingly recognized in, for example, the draft policy guidelines for government use of AI issued by the administration of US President Joe Biden in November 2023.
Hands-on experience with AI decision-support tools can educate users about the limitations of such systems and improve their skill and confidence in applying them quickly and usefully under time pressure. Such training can also teach potential users the contexts in which a given AI tool works well and those in which it is likely to fail, preventing misuse that could have negative outcomes. Finally, training that includes information about adversaries' AI systems can help decision makers assess adversary capabilities and intentions, including the opportunities and challenges they may create.
Clear AI rules would strengthen stability in crisis situations
This initial research on how expanding AI capabilities affect crisis decision-making identifies some opportunities but also several risks. Training can mitigate some of these risks, but AI will always have a complicated relationship with decision makers and the decision-making process - it will never be a simple plug-in, nor will it replace the need for humans to make high-level strategic choices or to choose among competing sources of information.
All of this strengthens existing arguments for establishing some form of AI regulation akin to an arms control regime - a set of rules and acceptable uses governing the development and use of AI between the US and its adversaries, particularly China. Given the risks and challenges that arise as AI systems become more commonplace on and off the battlefield, such a regime could have a stabilizing effect if it reduced some of the uncertainty that drives misperception.
The challenge, of course, will be agreeing on a set of principles that all relevant parties can accept, as well as a mechanism for verifying compliance. This challenge is greatly amplified by the fact that the leaders in AI innovation are commercial firms, not governments, and by the speed at which AI systems advance and evolve. The Biden administration has drafted policies to guide the military's use of AI and established the AI Safety Institute to anticipate and prevent dangerous uses of AI technology. While there is a degree of alignment between the US and its key allies on these issues, any AI arms control regime would have to include China to have real impact. The two rivals held preliminary talks on AI safety and governance in May 2024, but given the strained relations and limited dialogue between Washington and Beijing, significant progress is unlikely in the near future. Nevertheless, American decision makers should press the issue whenever possible with partners who are willing to cooperate.
carnegieendowment.org
Translation: NB