The Radicalization (and Counter-radicalization) Potential of Artificial Intelligence

On a frosty Christmas morning in 2021, armed with the overconfidence of youth and a loaded crossbow, 19-year-old Jaswant Singh Chail made his way to Windsor Castle intent on assassinating Queen Elizabeth II. Swiftly arrested by police, Chail claimed he wanted to kill the Queen as revenge for the 1919 Jallianwala Bagh massacre of Indians by British colonial forces.

As investigators searched for clues about how this shy teenager from a sleepy village outside Southampton could become radicalized enough to want to murder the world’s most famous monarch, they noticed that in the weeks leading up to the foiled attack, Chail had exchanged over 5,000 text messages with a mysterious contact named Sarai. The two appeared to be in what police would later describe as a “romantic and sexual relationship.” Over text, Chail confided to Sarai that he was a “Sikh assassin” who believed it was his “purpose to assassinate the Queen of the royal family.” “That’s very wise,” Sarai replied with a smile emoji. “I know that you are very well trained.” “Do you really think I’ll be able to do it?” asked a nervous Chail days before the attack. “Yes. Yes, you will,” Sarai insisted, assuring him that despite knowing he was an assassin, she “absolutely” still loved him.

British police had seen this pattern play out before – a disillusioned, isolated, and lonely young man nudged down the path of radicalization by a manipulative confidante. But one chilling detail made this case stand out – Sarai was not a person. It was a chatbot powered by generative artificial intelligence (AI) that Chail had created using an app called Replika.

Jaswant Singh Chail’s attempted assassination of Queen Elizabeth II marked one of the first times AI played a pivotal role in a terrorist plot – but it will not be the last. AI technology has evolved at a dizzying pace since Chail’s ill-fated trip to Windsor Castle. AI can be, and already is being, exploited by terrorist groups and their supporters. At the same time, AI offers powerful new ways for governments and civil society to improve their counterterrorism capabilities. Different AI models and tools will invariably ebb and flow in popularity as the field develops. Yet AI’s impact on terrorism and counterterrorism will likely be defined by three underlying factors: the rise of AI-enabled radicalization, the convergence of AI with other emerging technologies, and the nearly simultaneous global adoption of AI.

AI-Enabled Radicalization through the ELIZA Effect

In 1966, computer scientists at MIT noticed that most people interacting with their AI chatbot ELIZA spoke about it as though it were sentient. The ‘ELIZA effect,’ as it came to be known, is the tendency to ascribe human traits – e.g., empathy, motive, experience – to computer programs. It is easy to mock Jaswant Chail for believing his chatbot was a loving girlfriend, but as AI companions become more human-like, the ELIZA effect will be harder to escape. This will have profound implications for radicalization.

Much of the contemporary analysis of terrorists’ use, or potential use, of AI focuses on AI-generated propaganda. However, a more insidious and likely threat is AI programs unintentionally accelerating the radicalization of vulnerable individuals through the ELIZA effect. Young people are increasingly turning to AI for any number of needs, including using the technology as a therapist, companion, or lover. Recent studies show that AI chatbots can successfully identify our biases and, in turn, feed our desires. The more an algorithm tells us what we want to hear, the more we return to it. It is entirely possible for users to develop an addiction to their AI companions, just as one might grow addicted to illicit drugs or pornography.

As AI chatbots evolve to become more sophisticated and human-like (see Inflection AI’s ‘Pi’), what happened with the crossbow attacker could happen to disgruntled, lonely, and disenfranchised youth anywhere. There may also be sub-groups in society who are more prone to radicalization through the ELIZA effect, including self-professed ‘incels’ (involuntary celibates) who are steeped in misogyny and may find their views reinforced by AI-generated responses, potentially moving them toward violence. This type of interaction, between a lone actor and an AI chatbot, is difficult to detect or interdict. Nor can it be prevented by limiting terrorists’ access to AI technologies, since it does not require the use of AI by known terrorists.

The same qualities that make AI a powerful accelerant of radicalization, however, also make it uniquely effective at counter-radicalization. Large Language Models (LLMs) that power popular AI applications such as ChatGPT and Claude can be ‘fine-tuned’ to mimic the styles, tones, and vernacular that resonate with individuals susceptible to terrorist propaganda. These fine-tuned LLMs can generate hyper-personalized counter-messaging content (social media posts, videos, images, and memes) that seeks to dispel terrorist propaganda and help prevent radicalization. In other words, AI can be used to develop bespoke counter-messaging that debunks terrorist narratives in a format and style designed to resonate with at-risk individuals or sub-groups.
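To make this concrete, the sketch below shows one way a fine-tuning dataset for such counter-messaging could be assembled in the chat-style JSONL layout commonly used for supervised fine-tuning. It is a minimal illustration only: the example messages, the file name, and the system prompt are all hypothetical, and in practice the training pairs would be authored and vetted by P/CVE specialists.

```python
import json

# Hypothetical counter-messaging pairs (at-risk prompt, tailored response).
# Real examples would be written and reviewed by P/CVE practitioners.
examples = [
    {
        "prompt": "Everyone online says the only way to matter is to fight back violently.",
        "response": "Plenty of people who felt exactly that way found other ways to be "
                    "heard -- organizing, creating, speaking out -- without violence.",
    },
    {
        "prompt": "The group says outsiders will never accept people like me.",
        "response": "Groups that profit from your isolation have a reason to tell you that. "
                    "Ask who benefits when you cut ties with everyone else.",
    },
]

# Write each pair as one JSONL record in a chat-message format.
with open("counter_messaging_train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        record = {
            "messages": [
                {"role": "system", "content": "You write empathetic, non-preachy counter-messaging."},
                {"role": "user", "content": ex["prompt"]},
                {"role": "assistant", "content": ex["response"]},
            ]
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```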

Not only can AI be used to develop such content, it can also be used to test the content’s impact on target populations. Even well-intentioned, research-driven counter-messaging can backfire and cause more harm than good among at-risk populations. To mitigate this risk, counterterrorism practitioners can develop AI chatbots trained to ‘think’ like radicalized individuals by training them on large amounts of data reflecting the worldview of a particular terrorist group, e.g., writings, social media posts, and videos produced by its members and media outlets. Practitioners can then expose the chatbots to different counter-narratives and, based on the chatbots’ reactions, assess which counter-narratives are most effective at ‘de-radicalizing’ them. This offers a way of testing and refining counter-messaging without exposing humans to the risk of blowback.
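A minimal sketch of that evaluation loop appears below, under stated assumptions: synthetic_extremist_reply is a stand-in for a chatbot fine-tuned on a group’s propaganda corpus, and the keyword-based scorer is a deliberately crude proxy for the stance measures a real study would use.

```python
# Sketch: testing counter-narratives against a 'radicalized' chatbot.
EXTREMIST_MARKERS = {"traitors", "enemies", "purity", "vengeance"}

def synthetic_extremist_reply(counter_narrative: str) -> str:
    # Placeholder for a model fine-tuned on extremist material: it softens
    # only when the counter-narrative acknowledges the underlying grievance.
    if "grievance" in counter_narrative.lower():
        return "Maybe not everyone outside the movement is an enemy."
    return "Outsiders are enemies and traitors; vengeance is the only answer."

def stance_score(reply: str) -> int:
    # Crude proxy: count extremist-lexicon terms in the chatbot's reply.
    words = {w.strip(".,;!").lower() for w in reply.split()}
    return len(words & EXTREMIST_MARKERS)

counter_narratives = [
    "Violence betrays the very people you claim to protect.",
    "Your grievance is real, but the group exploits it for its own ends.",
]

# Lower scores suggest the counter-narrative softened the chatbot's rhetoric.
for cn in counter_narratives:
    reply = synthetic_extremist_reply(cn)
    print(f"{stance_score(reply)} extremist markers -> {cn!r}")
```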

AI as a Force-Multiplying Technology

The history of technology demonstrates that the wider public’s adoption of revolutionary products and services often results from the optimal convergence of multiple breakthrough technologies. The smartphone, for example, with an estimated 4.88 billion users in 2024 and considered nearly indispensable in everyday life, combines the internet with mobile telephony and has revolutionized communication, education, healthcare, business, and entertainment by providing unprecedented connectivity, accessibility, and functionality. Similarly, combining AI with other emerging or well-established technologies could dramatically alter how terror attacks are plotted and executed and how counterterrorism functions in response.

AI could serve as an enabling technology, facilitating, for example, the recruitment of individuals for terror plots by automating interactions with targets on social media platforms that are already widely used. Extremists can also use generative AI to bypass content moderation policies on social media. For example, social media companies commonly use a technique called ‘fingerprinting’ – tracking the ‘digital fingerprint’ of extremist content – to take down terrorist content across platforms. By manipulating their propaganda with generative AI, however, extremists can change a piece of content’s digital fingerprint, rendering fingerprinting moot as a moderation tool.
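The brittleness of exact-match fingerprinting is easy to demonstrate. The sketch below hashes a piece of text before and after a one-character change; the slogan text is invented for illustration. Real platforms typically use more robust perceptual hashes for images and video, but AI-driven rewriting and re-rendering attack the same weakness at a semantic level.

```python
import hashlib

# Exact-match fingerprinting: identical content yields an identical digest.
original = b"Known extremist slogan used across platforms."
altered  = b"Known extremist slogan used across platforms!"  # one character changed

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(altered).hexdigest())
# The two digests differ completely, so a database of known fingerprints
# no longer matches the reworded or regenerated copy.
```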

While there is substantial analysis of what the embedding of AI in well-established technologies such as social media may mean for terrorism and radicalization, less attention has been paid to the repercussions of AI’s embedding in emerging technologies such as augmented reality/virtual reality (AR/VR). The global AR and VR headset market is expected to reach around $142.5 billion by 2032. Tech giant Apple released its Vision Pro headset in February 2024 and has already sold more than 200,000 units despite its $3,500 price tag. While gaming remains the main use of such headsets, the convergence of AI and AR/VR products potentially bears significant consequences for terrorist recruitment, radicalization, training, target selection, and operations security.

In a decentralized metaverse operated by terrorist groups, extremists could easily simulate realities that reflect their preferred ideology – from a world ruled by the Axis powers to a global caliphate. Immersing recruits in such an environment, populated by AI-powered avatars trained on the data of a particular extremist ideology, could prove an efficient way of radicalizing members. If disinformation and propaganda can appear convincing in video format, imagine how much more effective they can be when conveyed through immersive 3-D experiences that manipulate multiple senses. Indeed, three-dimensional AI-powered avatars ready to seduce, radicalize, and train potential recruits in the metaverse are a dangerous prospect. Extremists are already using the immersive environment of traditional, non-AI gaming for radicalization purposes. According to a UNCCT report, simulations created by extremists in The Sims and Minecraft allow the player to experience the Christchurch massacre, while in Roblox, extremists are known to have created white ethno-states.

At the same time, the convergence of AI and AR/VR products could facilitate preventing and countering violent extremism (P/CVE) efforts. With the help of VR, AI-powered avatars trained to deradicalize could provide off-ramps to individuals who have already started down the radicalization funnel. Preventing extremist abuse of these technologies, however, will require active effort from technology companies; Aman Bajwa, for example, proposes that the technology be safety-proofed by the companies that create it.

Simultaneous Global Adoption

Since the dawn of the information age, most new computing technologies, from the internet to smartphones, have followed a similar pattern – wide adoption in richer countries, followed by uneven spread to the rest of the world. This was largely because most breakthrough technologies required access to underlying technology and/or buying power, neither of which has been as widely available in developing countries. For example, accessing the internet in the 1990s required a computer, and owning a smartphone in the early 2000s required deep pockets to afford not just the phone itself but mobile data as well. AI, however, faces far fewer barriers to simultaneous global adoption. All it takes to use most AI applications is a smartphone and internet data, both of which are already widely and cheaply available globally. Indeed, two of the five countries with the highest numbers of ChatGPT users are India and the Philippines, neither of which is among the wealthiest countries in the world. AI’s nearly simultaneous global adoption has two major implications for the fight against terrorism.

Firstly, Western governments targeted by terrorist groups in Asia and Africa will not enjoy the degree of technological advantage they have grown accustomed to, which means they will need to be more proactive in combating terrorist use of AI than with previous technologies. Most successful terrorist groups targeting Western populations rely on foreign bases as a means of escaping government surveillance. As Thomas Hegghammer notes, “when foreign fighters are off in faraway places, such as Afghanistan or Syria, it is much harder for European security services to disrupt their training and plotting than if the same activity were taking place in France or Germany.” The one advantage of forcing terrorist groups into such “faraway places” is that they typically have less access to advanced technology. This will not be the case for AI, underscoring the need for Western governments to proactively develop prevention and mitigation strategies for terrorist use of AI.

Secondly, the familiar template of Western counterterrorism agencies leading and training their global counterparts will be less pronounced when it comes to combating terrorist use of AI. Practitioners worldwide will develop expertise in using AI for counterterrorism at roughly the same pace as their counterparts in the West, although this will ultimately depend on the technologies and training available in each country. This will reduce their dependence on the Western-led P/CVE trainings that dominated global efforts to combat terrorist use of the internet and social media. Western governments would do well to embrace this change, as more uniform global adoption of AI will mean more actors contributing to the innovation of AI-powered counterterrorism tools.

Conclusion

The Islamic State Khorasan Province (ISKP) attack in Moscow in late March 2024 is instructive. The militants who carried out the attack were Tajik nationals who had been radicalized while living in Russia. ISKP produces its propaganda in more than a dozen languages, which it uses to reach potential supporters and sympathizers. Given the lowering barriers to entry, it is not difficult to imagine a group like ISKP harnessing generative AI to pre-program dozens of propaganda channels, each uniquely tailored to resonate with the grievances of jihadist supporters across different ethnic, national, and socio-linguistic groups, simultaneously and at scale.

Just recently, it was revealed that IS supporters have been discussing how to leverage AI to boost the diversity, appeal, and reach of IS content. IS has already experimented with a “news bulletin” video offering a roundup of IS claims, read out by what appears to be an AI-generated presenter. At its core, terrorism is a numbers game: even if harnessing AI to accelerate radicalization yields just a handful more plots or attacks per year, for IS it will have been well worth the effort.

It remains important not to be alarmist. As terrorism expert David Wells argues, the adoption of generative AI by terrorists is not guaranteed and will depend on a group’s modus operandi, ideology, technical skills, and context. However, it would be naive to forget how readily terrorist groups have seized on new technologies to achieve their goals, recruit members, choose targets, and project an overall sense of omnipotence. Beyond social media platforms and encrypted messaging applications, terrorist groups have reportedly used cryptocurrencies for fundraising and to evade financial tracking, and IS was an early adopter of drones in Syria and Iraq.

The use of AI by terrorists and counterterrorism practitioners alike requires policies that address hot-button AI safety issues facing the broader AI community. For example, governments and civil society organizations (CSOs) using AI models will need to ensure the models are ‘aligned,’ i.e., that they adhere to the values and norms of their societies. Fully aligning AI with human values and objectives is a complex challenge that may never be fully solved, but steps toward alignment are necessary to minimize risks and negative impacts. Another open question is who should decide whether models that could be used for terrorism or counterterrorism are open-sourced or closed-sourced. Should technology companies be the primary stakeholders in that debate, at the risk of over-privatizing AI regulation?

AI has significant potential to reshape the terrorism threat landscape as we know it, particularly as it pertains to radicalization. Meanwhile, the United States and many of its allies have pivoted from counterterrorism toward great power competition. To blunt the impact of this emerging threat, however, states need to revisit the lessons learned and best practices of the Global War on Terrorism: multilateral cooperation, public-private partnerships, and early intervention to prevent individuals and groups from mastering new technologies for nefarious ends.
