America Isn’t Ready for the Wars of the Future

And They’re Already Here

On the battlefields of Ukraine, the future of war is quickly becoming its present. Thousands of drones fill the skies. These drones and their operators are using artificial intelligence systems to avoid obstacles and identify potential targets. AI models are also helping Ukraine predict where to strike. Thanks to these systems, Ukrainian soldiers are taking out tanks and downing planes with devastating effectiveness. Russian units find themselves under constant observation, and their communications lines are prone to enemy disruption—as are Ukraine’s. Both states are racing to develop even more advanced technologies that can counter relentless attacks and overcome their adversary’s defenses.

The war in Ukraine is hardly the only conflict in which new technology is transforming the character of warfare. In Myanmar and Sudan, insurgents and government forces alike are using unmanned vehicles and algorithms as they fight. In 2020, an autonomous Turkish-made drone fielded by Libyan government-backed troops struck retreating combatants, in what was perhaps the first drone attack conducted without human input. That same year, Azerbaijan’s military used Turkish- and Israeli-made drones, along with loitering munitions (explosives designed to hover over a target), to recapture territory in and around the disputed enclave of Nagorno-Karabakh. And in Gaza, Israel has fielded thousands of drones connected to AI algorithms, helping Israeli troops navigate the territory’s urban canyons.

In a sense, there is nothing surprising about the pace of such developments. War has always spurred innovation. But today’s shifts are unusually rapid, and they will have a far greater effect than those of the past. Future wars will no longer be about who can mass the most people or field the best jets, ships, and tanks. Instead, they will be dominated by increasingly autonomous weapons systems and powerful algorithms.

Unfortunately, this is a future for which the United States remains unprepared. Its troops are not fully ready to fight in an environment in which they rarely enjoy the element of surprise. Its jets, ships, and tanks are not equipped to defend against an onslaught of drones. The military has not yet embraced artificial intelligence. The Pentagon does not have nearly enough initiatives aimed at rectifying these failures—and its current efforts are moving too slowly. Meanwhile, the Russian military has fielded many AI-powered drones in Ukraine. And in April, China announced its largest military restructuring in almost a decade, with a new emphasis on building up technology-driven forces.

If it wants to remain the preeminent global power, the United States will have to shift course quickly. The country needs to reform the structure of its armed forces, overhaul its tactics and leadership development, find new ways to procure equipment, buy new types of gear, and better train soldiers to operate drones and use AI.

American policymakers, accustomed to governing the world’s most powerful defense apparatus, may not like the idea of such a systemic overhaul. But robots and AI are here to stay. If the United States fails to lead this revolution, malevolent actors equipped with these new technologies will become more willing to attack it. When they do, they might succeed. Even if Washington prevails, it will find itself increasingly surrounded by military systems designed to support autocracies and deployed with little respect for liberal values. The United States must therefore transform its armed forces so that it can maintain a decisive military advantage and ensure that robots and AI are used in an ethical manner.

CHANGE OR PERISH
The nature of war is, arguably, immutable. In almost any armed conflict, one side seeks to impose its political will on another through organized violence. Battles are fought with imperfect information. Militaries must contend with constantly fluctuating dynamics, including within their ranks, between them and their governments, and between them and ordinary people. Troops experience fear, bloodshed, and death. These realities are unlikely to change even with the introduction of robots.

But the character of war (how armies fight, where and when the fighting occurs, and with what weapons and leadership techniques) can evolve. It can change in response to politics, demographics, and economics. Yet few forces bring more change than technological development. The use of horses for riding, for example, enabled the creation of cavalry around the ninth century BC, which extended the battlefield beyond the flat expanses required for chariots and into new types of terrain. The introduction of the longbow, which could fire arrows over great distances, enabled defenders to pierce heavy armor and decimate advancing armies from afar. The invention of gunpowder in the ninth century AD led to the use of explosives and firearms; in response, defenders built stronger fortifications and placed a greater emphasis on producing weapons. The effect of technology grew more pronounced with the Industrial Revolution, which led to the creation of machine guns, steamships, and radios. Eventually, it also led to motorized and armored vehicles, airplanes, and missiles.

The performance of militaries often depends on how well they adapt to and adopt technological innovations. During the American Revolution, for example, the Continental Army fired muskets at the British in massed volleys and then charged forward with fixed bayonets. This tactic was successful because Continental forces were able to cross the distance between opposing lines before the British could reload. But by the Civil War, smoothbore muskets had given way to rifled weapons, which were far more accurate at much longer ranges. As a result, defending armies were able to decimate advancing infantry. Generals on both sides adjusted their tactics, relying, for example, on snipers and defensive fortifications such as trenches. Their decisions paved the way for the trench warfare of World War I.

Technological adaptation also proved essential in World War II. In the lead-up to that conflict, all advanced countries had access to the then-new technologies of motorized vehicles, armored tanks, aircraft, and the radio. But the German army was a trailblazer in bringing these components together. Its new warfighting doctrine, commonly called blitzkrieg (“lightning war”), involved air bombings that disrupted communications and supply lines, followed by armored vehicle and infantry assaults that broke through Allied lines and then traveled far past them. As a result, the Germans were able to overrun almost all of Europe in 18 months. They were stopped at Stalingrad, but only by a Soviet military that was willing to take enormous casualties.

To respond, the Allies had to develop similar tactics and formations. They had to demonstrate what one of us (Schmidt) has termed “innovation power”: the ability to invent, adapt, and adopt new technologies faster than competitors. They eventually succeeded at mechanizing their own forces, developing better ways of communicating, using massive amounts of airpower, and, in the case of the Americans, building and employing the world’s first nuclear bombs. They were then able to defeat the Axis in multiple theaters at once.

The Allies’ effort was incredible. And yet they still came close to defeat. If Germany had more efficiently managed its industrial capacity, made better strategic choices, or beaten the United States to an atomic weapon, Berlin’s initial innovation edge could well have proved decisive. The outcome of World War II may now seem preordained. But as the Duke of Wellington reportedly said of the outcome at Waterloo over a century earlier, it was a close-run thing.

ALL SYSTEMS GO
It has often been difficult for military planners to predict which innovations will shape future battles. But forecasts are easier to make today. Drones are omnipresent, and robots are increasingly in use. The wars in Gaza and Ukraine have shown that artificial intelligence is already changing the way states fight. The next major conflict will likely see the wholesale integration of AI into every aspect of military planning and execution. AI systems could, for instance, simulate different tactical and operational approaches thousands of times, drastically shortening the period between preparation and execution. The Chinese military has already created an AI commander that has supreme authority in large-scale virtual war games. Although Beijing prohibits AI systems from making choices in live situations, it could take the lessons it learns from its many virtual simulations and feed them to human decision-makers. And China may eventually give AI models the authority to make choices, as might other states. Soldiers could sip coffee in their offices, monitoring screens far from the battlefield, as an AI system manages all kinds of robotic war machines. Ukraine has already sought to hand over as many dangerous frontline tasks as it can to robots to preserve scarce manpower.

So far, automation has focused on naval power and airpower in the form of sea and air drones. But it will soon extend to land warfare. In the future, the first phase of any war will likely be led by ground robots capable of everything from reconnaissance to direct attacks. Russia has already deployed unmanned ground vehicles that can launch antitank missiles, grenades, and drones. Ukraine has used robots for casualty evacuation and explosive disposal. The next generation of machines will be led by AI systems that use the robots’ sensors to map the battlefield and predict points of attack. Even when human soldiers eventually join the fight, they will be guided by first-person-view aerial drones that can help identify the enemy (as already happens in Ukraine). They will rely on machines to clear minefields, absorb the enemy’s first volleys, and expose hidden adversaries. If Russia’s war on Ukraine expands to other parts of Europe, a first wave of land-based robots and aerial drones could enable both NATO and Russia to oversee a wider front than humans alone could attack or defend.

The automation of war could prove essential to saving civilian lives. Historically, wars were fought and won in open terrain where few people lived. But as global urbanization draws more people into cities and nonstate actors pivot to urban guerrilla tactics, the decisive battlefields of the future will likely be densely populated areas. Such fighting is far more deadly and far more resource-intensive. It will therefore require even more robotic weapons. Militaries will have to deploy small, maneuverable robots (such as robot dogs) on streets and flood the sky with unmanned aerial vehicles to take control of urban positions. These machines will be guided by algorithms that can process visual data and make split-second decisions. Israel has helped pioneer such technology, using the first true drone swarm in combat in Gaza in 2021. The individual drones bypassed Hamas’s defenses and communicated through an AI weapons system to make collective decisions about where to go.

The use of unmanned weapons is essential for another reason: they are cheap. Drones are a much more affordable class of weapons than traditional military jets. An MQ-9 Reaper drone, for example, costs roughly a fourth as much as an F-35 fighter jet. And the MQ-9 is one of the most expensive such weapons; a simple first-person-view drone can cost just $500. A team of ten of them can immobilize a $10 million Russian tank in Ukraine. (Over the past few months, more than two-thirds of the Russian tanks that Ukraine has taken out were destroyed by such drones.) This affordability could allow states to send swarms of drones, some designed to surveil and others to attack, without worrying about attrition. These swarms could then overwhelm legacy air defense systems, which are not designed to shoot down hundreds of objects simultaneously. Even when defense systems prevail, defending against a swarm will cost the defender far more than the attack costs the enemy. Iran’s April mass drone and missile strike against Israel cost at most $100 million, but U.S. and Israeli interception efforts cost more than $2 billion.
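To make the arithmetic explicit, the short Python sketch below computes the cost-exchange ratios implied by the figures cited in this paragraph; the numbers are this article’s rough estimates, not official procurement data.

    def cost_exchange_ratio(defender_cost: float, attacker_cost: float) -> float:
        # How many dollars the defender spends or loses for each dollar the attacker spends.
        return defender_cost / attacker_cost

    # Ten $500 first-person-view drones against a $10 million tank.
    print(f"{cost_exchange_ratio(10_000_000, 10 * 500):,.0f} to 1")        # 2,000 to 1

    # A roughly $100 million drone-and-missile barrage against roughly $2 billion in interceptions.
    print(f"{cost_exchange_ratio(2_000_000_000, 100_000_000):,.0f} to 1")  # 20 to 1

In both cases the ledger favors the attacker by orders of magnitude, which is the core of the affordability argument.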

The affordability of these weapons will, of course, make offense much easier, in turn empowering poorly funded nonstate actors. In 2016, Islamic State (ISIS) terrorists used cheap drones to counter U.S.-supported advances on the Syrian city of Raqqa and the Iraqi city of Mosul, dropping grenade-sized munitions from the sky and making it hard for the Syrian Democratic Forces to set up antisniper positions. Today, Iranian-backed militias are using drones to strike U.S. air bases in Iraq. And the Houthis, the militia that controls much of Yemen, are sending drones to strike ships in the Red Sea; their attacks have tripled the cost of shipping from Asia to Europe. Other groups could soon get in on the action. Hezbollah and al Qaeda in the Middle East, for example, might launch more regional attacks, as could Boko Haram in Nigeria and al Shabab elsewhere in Africa.

Drones are helping groups beyond the Middle East and Africa, as well. In Myanmar, a ragtag coalition of pro-democracy and ethnic militias is using repurposed commercial drones to fight off the military junta’s once-feared air force; the coalition now controls over half the country’s territory. Ukraine has similarly used drones to great effect, particularly in the war’s first year.

In the event of a Chinese amphibious assault, drones could help Taiwan, as well. Although Beijing is unlikely to launch a full attack on the island in the next few years, Chinese President Xi Jinping has ordered his country’s military to be capable of invading Taiwan by 2027. To stop such an attack, Taiwan and its allies would have to strike an enormous number of invading enemy assault craft within a very short time window. Unmanned systems—on land, sea, and air—may be the only way to do so effectively.

As a result, Taiwan’s allies will have to adapt the weapons used in Ukraine to a new type of battlefield. Unlike the Ukrainians, who have mostly fought on land and in the air, the Taiwanese will have to rely on underwater drones and autonomous sea mines that can reposition quickly in battle. And their aerial drones will have to be capable of longer flight times over larger stretches of ocean. Western governments are already developing such drones, and as soon as these new models are ready, Taiwan and its allies must manufacture them en masse.

SHAKE IT UP
No state is fully prepared for future wars. No country has begun producing the hardware it needs for robot weapons at scale, nor has any state created the software required to fully power automated weapons. But some countries are further along than others. And unfortunately, the United States’ adversaries are, in many ways, in the lead. Russia, having gained experience in Ukraine, has dramatically upped its drone production and now uses unmanned vehicles to great effect on the battlefield. China dominates the global commercial drone market: the Chinese company DJI controls an estimated 70 percent of global commercial drone production. And because of China’s authoritarian structure, the Chinese military has proved especially adroit at pushing through changes and adopting new concepts. One, termed “multidomain precision warfare,” entails the People’s Liberation Army’s use of advanced intelligence, reconnaissance, and other emerging technologies to coordinate firepower.

When it comes to AI, the United States still has the highest-quality systems and spends the most on them. Yet China and Russia are swiftly gaining ground. Washington has the resources to keep outspending them, but even if it maintains this lead, it could struggle to overcome the bureaucratic and industrial obstacles to deploying its inventions on the battlefield. As a result, the U.S. military risks fighting a war in which its first-rate training and superior conventional weaponry are rendered less than effective. U.S. troops, for example, have not been fully prepared to operate on a battlefield where their every move can be spotted and where they can be rapidly targeted by drones hovering overhead. This inexperience would be especially dangerous on open battlefields such as those in Ukraine and elsewhere in eastern Europe, or in the wide expanses of the Arctic. The U.S. military would also be especially vulnerable in urban battles, where enemies can more easily sever U.S. communications lines and where many American weapons are less useful.

Even at sea, the United States would be vulnerable to its adversaries’ advances. Chinese hypersonic missiles could sink U.S. aircraft carriers before they make it out of Pearl Harbor. Beijing is already deploying AI-powered surveillance and electronic warfare systems that could give it a defensive advantage over the United States across the Indo-Pacific. In the air, the capable but costly F-35 might struggle against swarms of cheap drones. So might the heavily armored Abrams tanks and Bradley fighting vehicles on the ground. Given these unfortunate facts, U.S. military planners are right to have concluded that the era of “shock and awe” campaigns, in which Washington could decimate its adversaries with overwhelming firepower, is finished.

To avoid becoming obsolete, the American military needs to make major reforms. It can start by shaking up its processes for acquiring software and weapons. Its current purchasing process is too bureaucratic, risk-averse, and slow to adapt to the rapidly developing threats of the future. For example, it relies on ten-year procurement cycles, which can lock it into particular systems and contracts long after the underlying technology has evolved. It should, instead, ink shorter deals whenever possible.

Similarly, the United States must purchase from a wider pool of companies than it typically uses. In 2022, Lockheed Martin, RTX, General Dynamics, Boeing, and Northrop Grumman received over 30 percent of all Defense Department contract money. New weapons manufacturers, by contrast, received hardly any: last year, less than one percent of all Defense Department contracts went to venture-backed companies, which are generally more innovative than their larger counterparts. That imbalance needs to shrink. The next generation of small, cheap drones is unlikely to be designed by traditional defense firms, which are incentivized to produce fancy but expensive equipment. Such drones are more likely to be created as they were in Ukraine: through a government initiative that supports dozens of small startups. (One of us, Schmidt, has been a longtime investor in defense technology companies.)

To prepare for the future, however, the United States will need to do more than simply reform the way it purchases weapons. It must also change the military’s organizational structures and training systems. It should make its complex, hierarchical chain of command more flexible and give greater autonomy to small, highly mobile units whose leaders are trained and empowered to make crucial combat decisions. Such units will be more nimble, a critical advantage given the fast pace of AI-powered war, and they are less likely to be paralyzed if adversaries cut their communications lines to headquarters. They must also be connected to new platforms, such as drones, so they can be as effective as possible. (Autonomous systems can also help improve training.) U.S. special operations forces are a possible template for how these units could operate.

RISKS AND REWARDS
This new age of warfare will bring some moral benefits. Advances in precision technology could lead to fewer indiscriminate aerial bombings and artillery attacks, and drones can spare the lives of soldiers in combat. But the rates of civilian casualties in Gaza and Ukraine cast doubt on the notion that conflicts are becoming any less deadly overall, especially as they move into urban areas. And the rise of AI warfare opens a Pandora’s box of ethical and legal issues. An autocratic state, for example, could easily take AI systems designed to collect intelligence in combat and deploy them against dissenters or political opponents. China’s DJI has been linked to human rights abuses against Uyghurs in China, and the Russian-linked Wagner paramilitary group has helped the Malian military conduct drone strikes against civilians. These concerns are not limited to U.S. adversaries. The Israeli military has used an AI program called Lavender to identify potential militants and target their homes with airstrikes in densely populated Gaza, with little human oversight; according to +972 Magazine, human reviewers spend just 20 seconds authorizing each attack.

In the worst-case scenario, AI warfare could even endanger humanity. War games conducted with AI models from OpenAI, Meta, and Anthropic have found that the models escalate to kinetic war, including nuclear war, far more readily than human players do. It does not take much imagination to see how matters could go horribly wrong if such systems were actually used. In 1983, a Soviet missile detection system falsely classified light reflected off clouds as an incoming nuclear attack. Fortunately, a human officer was responsible for processing the alert, and he determined that the warning was false. But in the age of AI, there might not be a human to double-check the system’s work.

Thankfully, China and the United States appear to recognize that they must cooperate on AI. Following their November 2023 summit, U.S. President Joe Biden and Xi pledged to jointly discuss AI risk and safety issues, and the first round of talks took place in Geneva in May. This dialogue is essential. Even if cooperation between the two superpowers starts small, perhaps achieving nothing more than establishing a shared language regarding the use of AI in war, it could lay the foundations for something greater. During the Cold War, an era of great-power rivalry significantly more intense than the current U.S.-Chinese competition, the Soviet Union and the United States were able to build a strong regime of nuclear safety measures. And like the Soviets, Chinese officials have incentives to cooperate with Washington on controlling new weapons. The United States and China have different global visions, but neither wants terrorists to gain possession of dangerous robots. They may also want to stop other states from acquiring such technology. Great powers that possess formidable military technology almost always have an overlapping interest in keeping it to themselves.

Even if China won’t cooperate, the United States should ensure that its own military AI is subject to strict controls. It should make sure AI systems can distinguish between military and civilian targets. It must keep them under human command. It should continuously test and assess systems to confirm that they operate as intended in real-world conditions. And the United States should pressure other countries—allies and adversaries alike—to adopt similar procedures. If other states refuse, Washington and its partners should use economic restrictions to limit their access to military AI. The next generation of autonomous weapons must be built in accordance with liberal values and a universal respect for human rights—and that requires aggressive U.S. leadership.

War is nasty, brutish, and often much too long. It is an illusion to think that technology will change the underlying human nature of conflict. But the character of war is changing both rapidly and fundamentally. The United States must change and adapt, as well, and American officials must do so faster than their country’s adversaries. Washington won’t get it exactly right—but it must get it less wrong than its enemies.
