BEYOND BELIEF: THE IMPERATIVE TO DEVELOP EMPOWERED MILITARY AI

An empowered military AI (EMAI) that independently makes lethal decisions is scary. Ancient mythology is full of stories of creators being destroyed by their creations, as when the Olympian Gods overthrew the Titans. Killer machines are a mainstay of science fiction. Long before Michael Crichton’s Westworld and James Cameron’s Terminator, Samuel Butler’s 1872 novel Erewhon described an isolated civilization that had banned complex machines out of a fear that technology would someday supplant humankind. Butler quotes one Erewhonian philosopher, “I fear none of the existing machines; what I fear is the extraordinary rapidity with which they are becoming something very different to what they are at present.”

Advanced AI seems to tap into some primal human fears. Risk expert David Ropeik highlights thirteen “fear factors” that make people more afraid of something, and advanced AI has eight of them: lack of control, trust, and choice; the fact that it is man-made; its uncertainty; its potential for catastrophe; its novelty; and the personal risk it poses to us in potentially taking our jobs (or our lives). These factors make empowered AI particularly frightening, encouraging a denial of its possible implications. We want humans to perform better than machines, and we do not want machines to make life-or-death choices; but these are normative arguments, and wishful thinking should not masquerade as technological reality.

We must not confuse the normative argument against empowered military AI systems, compelling as it may be, with an assessment of the inherent technological potential of AI. There is a vast gulf between “we should not do it” and “we cannot do it,” and the history of war repeatedly demonstrates that we often do things that we should not do. In any case, moral or ethical concerns are unlikely to limit the development of advanced, empowered military AI. Such normative objections have a dismal record of limiting the development of horrible ways for humans to slaughter each other. We fare better at creating structures for governing the use of military technology after it has already proliferated, especially in situations in which both sides of an adversarial relationship have the technology.

Perhaps very advanced military AI systems will face inherent performance limitations and humans will remain at the center of military decision-making. If this is true, different nations will struggle with similar challenges as they develop empowered military AI systems. Yet we are confident that no such limitations exist. Why? We have three reasons.

First, as we argued in a previous essay, human decision-making capabilities are limited by human biology. Yes, we are biological marvels, and our ability to adapt and create has led to extraordinary achievements in art, science, and technology. We have explored the universe and the atom. Yet we experience reality indirectly, mediated through our imperfect senses. Our biology anchors our intelligence. We tire, make mistakes, get emotional, and forget. We judge ourselves as intelligent only because we lack a higher comparison point. When we evaluate intelligence, looking down is easy. We can recognize lower levels of intelligence in other animals, and (perhaps more controversially) sometimes recognize a lower level of intelligence in another person. But looking up is much more difficult. We have no experience with intelligence surpassing that of, say, top theoretical physicists, and most of us cannot comprehend that level of brilliance. Even the greatest scientific leaps, such as the development of quantum mechanics, were made by brilliant humans constrained by human limitations. The entire spectrum of human intelligence remains remarkably narrow. Compared to an advanced AI, Einstein and the village idiot would be indistinguishable. Our technological advancements may amplify our capabilities, but they do not make us superhuman, and they do not let us transcend our limitations.

Second, even if humans enjoy some persistent advantages in decision-making, current plans to keep a human “in the loop” in advanced military AI systems are unrealistic. As we wrote in a prior article, the complex, high-tempo operations of future wars will overwhelm human decision-makers, creating persistent decision bottlenecks. To cope, human decision-makers will resort to risky shortcuts that will ultimately undermine the AI systems. As Secretary of the Air Force Frank Kendall observed, “If the human is in the loop, you will lose. You can have human supervision and watch over what the AI is doing, but if you try to intervene you are going to lose.”

Finally, we believe that the United States’ potential adversaries are likely to be very motivated to push the boundaries of empowered military AI, for three reasons: demographic transitions, control of the military, and fear of the United States. The path of technological development is deeply influenced by non-technological forces. As scholar of technology and former defense official Michael Horowitz observes, “The relative impact of technological changes often depends as much or more on how people, organizations, and societies adopt and utilize technologies as it does on the raw characteristics of the technology.” We have ample reason to believe that our potential adversaries may not share the fears and prejudices that constrain our development of EMAI.

Regimes such as those in Russia and China are grappling with significant demographic pressures, including shrinking working-age populations and declining birth rates. These trends threaten to weaken their military force structures over time. AI-driven systems offer a compelling solution to this problem by offsetting the diminishing pool of people available for recruitment. In the face of increasingly automated warfare, these regimes can augment their military capabilities with AI systems that process vast data streams, adapt swiftly to battlefield changes, and execute missions without human intervention. In this sense, demographic constraints make the pursuit of military AI not only desirable but essential for sustaining their power projection and tactical flexibility.

Moreover, totalitarian regimes face a deeper internal challenge that encourages the development of EMAI: the inherent threat posed by their own militaries. Autonomous systems offer the dual advantage of reducing dependence on human soldiers, who may one day challenge the regime’s authority, and increasing central control over military operations. In authoritarian settings, minimizing the risk of military-led dissent or coups is a strategic priority.

From a geopolitical perspective, simple game theory further suggests that Russia and China will feel compelled to develop empowered military AI, fearing a strategic disadvantage if the United States gains a technological lead in this domain. While these regimes may share some Western concerns about delegating lethal authority to AI, the practical necessity of maintaining a competitive edge in an evolving security environment will probably override these reservations, pushing them to aggressively pursue these capabilities.
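To see why, consider a deliberately stylized two-player game. In the sketch below, the payoff numbers and the way the interaction is modeled are illustrative assumptions, not estimates; the point is only that when unilateral restraint looks like a strategic disadvantage and a unilateral lead looks like an advantage, developing EMAI becomes the better response to whatever the other side does, and both sides converge on an arms race even though mutual restraint would leave each better off.

```python
# Stylized arms-race game with hypothetical payoffs (illustrative only).
# payoffs[(side_a_action, side_b_action)] = (side_a_payoff, side_b_payoff)
ACTIONS = ("restrain", "develop")

payoffs = {
    ("restrain", "restrain"): (3, 3),  # mutual restraint: stable and cheap
    ("restrain", "develop"):  (0, 4),  # unilateral restraint: strategic disadvantage
    ("develop",  "restrain"): (4, 0),  # unilateral lead: temporary advantage
    ("develop",  "develop"):  (1, 1),  # arms race: costly and destabilizing
}

def best_response(rival_action: str) -> str:
    """Return side A's payoff-maximizing action given the rival's choice."""
    return max(ACTIONS, key=lambda action: payoffs[(action, rival_action)][0])

# "Develop" is the best response to either rival choice (a dominant strategy),
# so both sides end up at (develop, develop) even though (restrain, restrain)
# would leave each better off: the classic prisoner's-dilemma structure.
for rival_action in ACTIONS:
    print(f"if the rival chooses {rival_action!r}, best response: {best_response(rival_action)!r}")
```

The real payoffs are, of course, contested and far harder to quantify, but the compulsion this toy game captures is exactly the one described above.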

In light of these incentives, arms control will probably fail to stop the development of these systems, but it can still prevent a costly and destabilizing arms race. Arms control must be a close companion to the rise of empowered military AI. As advanced EMAI systems are developed and tested, the U.S. government needs a new generation of arms control experts to develop effective frameworks for the control of these systems. Given the radical nature of advanced AI, these frameworks must be unlike any prior arms control approaches. The U.S. government should work with both allies and potential adversaries to build institutional structures, processes, and technological countermeasures to protect all humanity from the worst possible futures of advanced military AI, namely, those in which EMAI systems with no safety controls have widely proliferated. But to lead such a process, we need to operate from a position of strength. There is no seat at the arms control table without arms.

We underestimate AI at our own peril. It is natural to forget what life was like before technology arrived to make it easier. Thus, we readily overlook the capabilities of current AI systems and focus on their limitations, forgetting how wondrous these tools are compared to what we had just a decade ago. If technology simply continues advancing at its current rate of acceleration, as futurist Ray Kurzweil suggests, AI capabilities will soon reach extraordinary levels. OpenAI’s accelerated ChatGPT rollout shows just how staggering this pace is, and it has led OpenAI CEO Sam Altman to acknowledge that we may soon coexist with a fundamentally different type of intelligence. This presents profound, even frightening, possibilities. Buckle your seatbelts.

Downplaying the potential of AI is not simply a distraction. It actively hampers research and development that could ensure a safer future. A proper approach should assume that AI surpassing human decision-making capabilities in warfare is not only possible but inevitable, and coming sooner than expected. We must reject unproven assumptions of human superiority and base our military AI development on an inductive, test-centric approach focused on meeting and beating performance benchmarks. In practice, EMAI systems would have the chance to demonstrate that they outperform human experts in testing environments that simulate high-uncertainty, complex battle scenarios. Rigorous, fair testing, not speculation, is the right way to assess AI’s true warfare potential, even where performance standards initially seem unattainable.
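As a purely illustrative sketch of what such a test-centric gate might look like, the snippet below scores an EMAI candidate and the best available human experts on the same simulated scenarios and advances the system only if it clearly beats the human baseline. The scenario model, scoring logic, sample size, and thresholds are hypothetical placeholders, not a real evaluation harness.

```python
# Illustrative "better than the best" benchmark gate for an EMAI candidate.
# All parameters and the scenario model are hypothetical placeholders.
import random

def run_scenario(decision_maker: dict, seed: int) -> float:
    """Score one simulated high-uncertainty engagement (placeholder logic)."""
    rng = random.Random(seed)
    # Stand-in for a full combat simulation: a noisy score around the
    # decision-maker's nominal skill level.
    return rng.gauss(decision_maker["skill"], decision_maker["variance"])

def benchmark(candidate: dict, expert_baseline: dict,
              n_scenarios: int = 1000, margin: float = 0.05) -> tuple[float, bool]:
    """Pass the gate only if the candidate clearly beats the best human baseline."""
    wins = 0
    for seed in range(n_scenarios):
        ai_score = run_scenario(candidate, seed)
        human_score = run_scenario(expert_baseline, seed)  # same scenario for both
        if ai_score > human_score:
            wins += 1
    win_rate = wins / n_scenarios
    return win_rate, win_rate > 0.5 + margin  # demand a clear edge, not a coin flip

if __name__ == "__main__":
    emai_candidate = {"skill": 0.72, "variance": 0.10}  # hypothetical numbers
    human_experts = {"skill": 0.70, "variance": 0.15}
    rate, passed = benchmark(emai_candidate, human_experts)
    print(f"win rate vs. expert baseline: {rate:.1%}, gate passed: {passed}")
```

The structure, not the numbers, is the point: a fixed battery of paired scenarios, a single clear standard (beat the best humans by a real margin), and a pass/fail decision driven by measured performance rather than prior belief.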

What if the skeptics are right? What if EMAI cannot consistently beat human experts? To reach that conclusion, we must rely on repeated failures in good-faith research and development. The performance standard for military AI must be clear and challenging: be better than the best. Yet we need a developmental approach that does not predestine failure. For AI to be better than the best, a lot of things need to go well. Each stage of an AI’s process (sense, process, act) and each part of an AI’s “constellation of technologies” represents a potential failure point. More precisely, we would reach the “AI can’t be better than the best” conclusion only after some amount of time (decades, probably) and money (likely hundreds of billions, possibly trillions, of dollars). At that point, having failed to produce systems that work, we would conclude that additional research and development expenditures are not worth the risk of continued failure. But we have not yet made this effort. We have, in fact, barely begun. The current trajectory of technological change gives us little confidence in predictions that empowered military AI systems will inevitably fail.

Unappealing as it may be, the United States needs to be a leader in developing empowered military AI. Paradoxically, we may need to enable the robot apocalypse if we want to avert it.
