The appearance of Turkish artificial intelligence-controlled drones in Libyan skies has rekindled questions on how lethal autonomous weapons will affect regional geopolitics and whether they should be banned.
Turkey’s flourishing drone industry is back in the international spotlight following a UN report suggesting that Turkish-made artificial intelligence-based drones might have been used to kill enemy troops in Libya last year. If confirmed, the incident would mark the debut of “killer robots” in the global theater of war.
The report by the UN Panel of Experts on Libya indicates that a Kargu-2 kamikaze drone manufactured by Turkey’s state-owned company STM was likely used in March 2020 in clashes between the forces of the Turkish-backed Government of National Accord and the Libyan National Army of eastern warlord Khalifa Hifter following the latter’s siege of Tripoli. Logistics convoys and retreating Hifter forces “were hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 and other loitering munitions,” the report says. “The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability,” it noted, without specifying whether anyone was actually killed.
Turkish military sources familiar with the matter confirmed that Kargu-2s had been used in Libya on multiple occasions, but denied that the drones — which have both autonomous and manual operation modes — were allowed to use artificial intelligence to select and hit targets. The drones operated autonomously only to reach target areas, after which operators on the ground made the decisions to strike, the sources told Al-Monitor on condition of anonymity.
STM describes the Kargu-2 as a loitering rotary-wing attack drone with real-time image processing and embedded machine learning algorithms. It is also equipped with swarming capability, allowing up to 20 drones to operate together. In its autonomous mode, the Kargu-2 can be programmed to attack targets without data connectivity between the ground unit or operator and the munition.
The UN report was met with nationalist euphoria in Turkey’s pro-government media, which lauded the Kargu-2 as further proof of how far the domestic defense industry has progressed under President Recep Tayyip Erdogan, citing drone sales to Azerbaijan, Qatar and Ukraine. Foreign observers, meanwhile, focused on the global ramifications of the events in Libya and the advance of drones in theaters of war across the region.
In a June 3 article headlined “Armed Low-Cost Drones, Made by Turkey, Reshape Battlefields and Geopolitics,” The Wall Street Journal reported, “Smaller militaries around the world are deploying inexpensive missile-equipped drones against armored enemies, a new battlefield tactic that proved successful last year in regional conflicts, shifting the strategic balance around Turkey and Russia. Drones built in Turkey with affordable digital technology wrecked tanks and other armored vehicles, as well as air-defense systems, of Russian protégés in battles waged in Syria, Libya and Azerbaijan.”
According to defense analysts at the Istanbul-based Center for Economics and Foreign Policy Studies, Turkey’s political and military decision-makers see unmanned military systems and robotic warfare not as mere military modernization but as “an opportunity to pioneer [the country’s] next geopolitical breakthrough.”
Along with Kargu-2, the Alpagu fixed-wing loitering munition system and the Togan autonomous multi-rotor reconnaissance drone — both also developed by STM — stand out as examples of advanced autonomous capabilities in the Turkish defense industry. According to the company, all three unmanned aerial vehicles use computer imaging for targeting and are programmed with machine learning algorithms to optimize target classification, tracking and attack capabilities without the need for a GPS connection.
While such technologies sound like a revolutionary step in warfare, a global debate has been simmering since the early 2000s on whether lethal autonomous weapon systems should be regulated or banned, given ethical concerns over their ability to select and hit targets without human intervention. The release of the UN report on Libya has rekindled the debate, which had been largely hypothetical thus far.
Another aspect of the debate is how and to what extent autonomous systems will change the character of war. The US journal Popular Mechanics draws a comparison to how the atomic bomb served as a divider between eras, saying the events in Libya “may similarly divide the time when humans had full control of weapons, and a time when machines made their own decisions to kill.”
Many argue that lethal autonomous systems should be banned under an international treaty similar to the 2017 UN Treaty on the Prohibition of Nuclear Weapons, a legally binding instrument aimed at the full eradication of nuclear weapons.
Another argument holds that lethal autonomous systems fall within the scope of indiscriminate weapons and should be banned under an existing UN convention prohibiting or restricting conventional weapons deemed to be excessively injurious or to have indiscriminate effects.
In 2019, UN Secretary-General Antonio Guterres urged states to take action to ban autonomous weapons systems. “Machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law,” he said.
The International Committee of the Red Cross urged governments in mid-May to prohibit the use of autonomous weapons for targeting human beings and impose strict restrictions on other uses. Human Rights Watch, meanwhile, has called for “a new process … to negotiate an international treaty on killer robots,” highlighting the UN report on the incident involving the Kargu-2 in Libya and Azerbaijan’s alleged use of Israeli-made Harop loitering munitions in the Nagorno-Karabakh conflict last year.
STM has kept mum in the face of international reactions to the UN report. The company is likely pleased with the publicity in terms of marketing and future exports, but it needs to consider not only the technological aspect of the matter but also its implications in terms of international law, diplomacy and ethical concerns.
Back in 2013, I penned an article — “Drone warfare and contemporary strategy making: Does the tail wag the dog?” — that questioned whether new military technologies are undermining the subordination of military technique to military strategy and thus to policy. The question remains pertinent today in the context of lethal autonomous weapons. Such weapons are likely to change the nature of war, including political and ethical views of war. Against one-directional determinist views of the impact of autonomous systems, I am inclined to believe that the interaction of such systems with military strategy, ethics, culture and politics has barely begun.