Bottom Line Up Front
Disinformation campaigns can have wide-ranging impacts, from altering elections to encouraging acts of violence.
Both the public and private sectors, including the U.S. federal government and social media companies, have adopted inchoate strategies for combating disinformation, which have led to piecemeal responses.
With COVID-19 and the accompanying ‘infodemic,’ it is critical to consider a comprehensive response, including adopting new laws, deploying sanctions, and expanding organizational resources at scale to deal with the disinformation challenge.
Government solutions do not offer a panacea for the disinformation challenge and, as such, it remains critical that citizens build their own digital and media literacy skills.
The spread of disinformation represents a threat to international peace and stability, from major election disruptions to motivating people to commit violence. Russia’s sophisticated campaign to influence the 2016 U.S. presidential election raised the threat profile of disinformation significantly. Several examples demonstrate that conspiracy theories pushed online can lead to consequences in the real world. One recent instance was in Los Angeles, where a train engineer, Eduardo Moreno, intentionally derailed his train to disrupt U.S. government efforts to fight COVID-19. Moreno said he believed that the U.S. Navy hospital ship, the Mercy, was part of a government scheme to take over the country. Beyond conspiracy theories, deepfakes (AI-altered videos) have been deployed to manipulate individuals’ beliefs. Social media companies – the prime battleground for disinformation wars – have adopted an uneven approach to combating disinformation, including deepfakes. Twitter has proactively banned all political ads but decided only to flag, not remove, political deepfakes. In contrast, Facebook announced it would not remove political ads known to be false. These issues have become more acute with the COVID-19 pandemic, which the World Health Organization says has been accompanied by an ‘infodemic.’
Governments, especially the United States, have been slow to formulate a comprehensive policy response to disinformation. In fact, Congress did not hold a hearing on deepfakes until the summer of 2019, even though deepfakes were proliferating and recognized as a significant challenge well before that time. The federal government, via Congress, finally took steps to better comprehend the scope of the deepfake threat when it passed the National Defense Authorization Act (NDAA). Of note, the NDAA requires the Director of National Intelligence to report on the foreign weaponization of deepfakes and requires the Executive branch to notify Congress of any deepfake-disinformation efforts targeting U.S. elections. The NDAA also established a ‘Deepfakes Prize’ competition to encourage technological solutions for better identifying deepfakes. Prior to the NDAA, the Defense Advanced Research Projects Agency (DARPA) had already allocated nearly $70 million toward deepfake identification. In 2019, Facebook, Microsoft, and the Partnership on AI announced a $10 million cash prize to encourage better deepfake detection. Congress can also do more with respect to social media companies. Platforms like Twitter and Facebook often hide behind the legal shield of Section 230 of the Communications Decency Act (CDA), but it is time for Congress to reexamine whether social media entities should be held to account for allowing malicious disinformation campaigns to metastasize on their forums.
Without sufficient laws and accompanying strategies in place, the dissemination of disinformation, including the use of deepfakes, will continue unabated despite additional resources. The December 2017 U.S. National Security Strategy mentions disinformation just twice, while the European Union has been building a counter-disinformation strategy, codes of practice, and a comprehensive action plan since 2015. The next U.S. National Security Strategy should emphasize that disinformation is a significant threat to the United States. At least two states, Texas and California, have passed laws banning political deepfake videos. While both laws are imperfect, the criminalization of malicious deepfakes is a positive step – one the federal government should consider. The U.S. government should also consider aggressively using sanctions against foreign-based individuals and entities waging disinformation campaigns. Congress should consider adopting a ‘Sanctions Disinformation Act’ giving the U.S. Departments of State and Treasury the legal authorities to sanction individuals and organizations engaged in malicious disinformation efforts.
The United States should also consider expanding resources for its counter-disinformation activities. While the State Department’s Global Engagement Center (GEC), charged with countering disinformation globally, has an expanded proposed budget of $138 million this year, other public diplomacy programs useful for combating disinformation face drastic cuts totaling $184.5 million. The Trump administration has proposed deep cuts for the Voice of America (VOA) and Radio Free Europe, which are critical to explaining U.S. policy in a fact-based way. U.S. government efforts, on the whole, are not commensurate with the disinformation threat posed by Russia, China, and rogue transnational actors, including violent white supremacists. Congress should significantly increase the GEC’s budget and ensure that organizations like the VOA can continue to effectively provide objective U.S. news to the world; it should not be an either/or choice. Even the most robust updates to federal laws, technological advancements, organizational enhancements, and private sector policy improvements will not immunize the public from disinformation. Citizens must increase their digital and media literacy to better ward off disinformation efforts designed to create fear, uncertainty, and confusion.