Towards an Anthropology of Surveillance

With the rapid growth of metadata and political and corporate surveillance in America during the last two decades, anthropologists Roberto J. González and David H. Price—long-time contributors to CounterPunch—have been studying the impacts and implications of these developments. Both Price and González recently published books that critically examine surveillance in the United States (Price's The American Surveillance State: How the U.S. Spies on Dissent and González's War Virtually: The Quest to Automate Conflict, Militarize Data, and Predict the Future). Below are excerpts from an extended conversation between the two on the cultural, military, and political dimensions of surveillance, technology, and power.

+++

David H. Price: Over the last two decades you've produced a wide body of anthropological work examining cultural knowledge systems—ranging from your work studying Zapotec Science to this latest book, War Virtually, which critically considers how metadata is militarized in ways that most of us are barely aware of. Before getting into the specifics of this new book, could you say something about how it connects to your previous work?

Roberto J. González: On the surface, my work probably seems disjointed—maybe bifurcated is a better word. My early research was a late 1990s village study about the shifting ecological knowledge and practices of campesinos in the Oaxacan mountains, who were bearing the brunt of Mexico’s integration into the global capitalist economy.

Around 2006, I started exploring questions related to militarization and culture. Part of this was driven by the fact that social scientists were getting sucked into the US wars in Iraq and Afghanistan to do counterinsurgency work. My new book builds on this by looking at how social scientists are being recruited for a different kind of mission, centered around high-tech warfare based on data analytics.

Much of what I’ve done deals with the question: How do powerful institutions—states and corporations—shape the work that ordinary scientists and technical experts carry out? And, what are the possibilities for radically different alternatives—for example, democratic and locally-based scientific and technological systems?

DHP: Have you been back to Oaxaca lately?

RJG: I’ve gone back several times over the years. Spending time in Oaxaca is always invigorating—and it keeps me grounded. What I mean is, it helps me keep a cross-cultural perspective. My graduate advisor Laura Nader once told me, “Always keep more than one arrow in your quiver.” That was great advice. Incidentally, she’s the reason I wound up in Oaxaca—she did her dissertation work there in the late 1950s and 60s.

Now, let me ask about your intellectual history. Tell me how you went from studying archaeology and Egyptian irrigation systems to uncovering the hidden history of US intelligence agencies. And how does your new book, The American Surveillance State, relate to your previous projects?

DHP: That's the thing about life—a lot of the twists and turns only make sense after the fact. You can't tell where you're going while it's all happening, or how you're going to later use skills you pick up doing other things. My dissertation research was sort of classic late-80s and early-90s cultural ecology work, looking at how water loss impacts people living upstream and downstream from each other, and how in Egypt the state is omnipresent—yet far from omnipotent. I looked at how different farmers had varying levels of power to do things like initiate maintenance work, and how the state was everywhere but relatively powerless.

RJG: That sounds a lot like northern Oaxaca. Technically, the state is in control, but all the important decisions are made locally, by elected village authorities.

DHP: I guess the short answer on how I went from this ethnographic work to studying American intelligence agencies is that I’d long had a strong interest in the history of anthropology—I did my Master’s thesis with George Stocking at Chicago, writing about early US ethnoarchaeology. Like lots of people, I’d heard rumors about anthropologists occasionally having CIA or Pentagon connections, so I decided to try and apply my research skills to see what sort of records I could find. When I told colleagues that I was trying to do this, people were interested but told me it would be impossible—or I might simultaneously publish and perish.

When I first made massive FOIA [Freedom of Information Act] requests for FBI and CIA records on anthropologists and others, I didn’t know it would lead me into a couple of decades of work, or that it would become the main focus of my research. I always assumed I’d keep working in the Middle East, but when large amounts of records started coming in, I felt a responsibility to write it up. That was the motivation for my first FOIA book, Threatening Anthropology. This was long, slow work, but it was rewarding. There were all sorts of connections that grew from my dissertation work on Egypt that might not be obvious—mainly a critical examination of state power.

RJG: Talk about your connection to Marvin Harris. He was a living legend when I was in graduate school twenty-five years ago.

DHP: While in grad school working on my doctorate at the University of Florida, I was Marvin Harris's research assistant for four years—functioning as a sort of pre-internet human Google. Harris would come to campus once a week with a long list of questions scrawled on yellow legal pads, and I'd attack the library or make cold calls on the WATS line [Wide Area Telephone Service] to try to answer them. I developed research and detective skills that were vital for doing this later historical work. Harris was kind to me. He paid me enough for indexing one of his books that I could fund a research trip to Yemen one summer. When I met him in the mid-80s, he was an older version of the once-fierce debater, and he never demanded any sort of dogmatic intellectual loyalty on points of theory.

Among other things, I'd left Chicago because I hated the anti-political-action, inward-drowning postmodernism that was flourishing there in the mid-1980s. While I had some differences of interpretation with Harris, he certainly wasn't bogged down in that kind of nonsense, and he was fine with my disagreements with him. I learned a lot from him, including how to write clearly. Obviously, there's a lot of basic materialist economic determinism in my work—looking at how funding opportunities helped shape anthropology is a basic theme in all my books.

Let me ask you the same question about your graduate work with Laura Nader. What can you say about your work with such a legendary figure in the discipline, and what are the identifiable impacts of Dr. Nader on your current work?

RJG: It’s funny that you mention this—I recently wrote an article for Public Anthropologist where I go into detail about how her approach has shaped my research. Like Chicago, the Berkeley anthropology department was steeped in postmodernism and trendy French philosophy in the 1990s, but Nader didn’t push it on her students—in fact, she suggested that we not spend too much time on it; she thought it more important to get a solid foundation in the history of anthropology, and I did. Students who were interested in pressing contemporary issues gravitated to her because she had experience doing critical research on powerful institutions. That’s the most obvious connection between her work and my current research on military technologies, plus of course her work on controlling processes. Some of my peers seemed frightened by Laura Nader—she’s always been a straight shooter—but I appreciated her candid feedback. She encouraged us to write clearly and for multiple audiences, including laypeople. We’re both lucky to have been trained by anthropologists who wrote intelligibly!

Let's get into your new book. In the first few pages, you mention that the American public long opposed the idea of intrusive, centralized surveillance. Can you talk about how US spy agencies succeeded in normalizing widespread surveillance over the last century? As you know, billions of people, including Americans, now appear more than willing to subject themselves to digital surveillance—online and elsewhere. How did we get to this point?

DHP: Americans once abhorred government and corporate surveillance. Survey data and reactions to public events during the early and mid-20th century show that most Americans believed wiretaps, even wiretaps of criminals, violated basic privacy rights. In the mid-1970s, when the post-Watergate Church and Pike congressional committee hearings disclosed the extent of illegal governmental campaigns monitoring American political activities, there was broad public outrage, and some short-lived measures to provide oversight of US intelligence agencies were put in place. But historical memory is a fragile thing, easily dislodged by fear. While distrust of government surveillance remained, decades later, with the rise of the internet and capitalism's use of metadata to track consumers and reward those who surrendered to surveillance capitalism, Americans were socialized to accept being tracked through things like supermarket loyalty programs and the proliferation of traffic surveillance cameras. Then the widespread fear and rapid adoption of the USA PATRIOT Act following the terror attacks of September 2001 paved the way for government surveillance operations to spread with little resistance.

RJG: 9/11 really accelerated things, didn’t it?

DHP: It did. When news of secret plans for the Total Information Awareness program leaked in early 2003, a public outcry followed: the proposed program planned to collect a broad mix of surveillance data from things like traffic cameras, credit card purchases, and cell phone and internet activity for use by US intelligence agencies. While plans for Total Information Awareness were scuttled after the outcry, most elements of that surveillance dream were in fact developed by US intelligence agencies—as we know from Edward Snowden's leaks and other sources. By the time Snowden revealed the existence of these vast secret governmental data dragnets, essentially monitoring all our electronic lives all the time, the American public, after years of fear conditioning in the Terror Wars and the spread of corporate metadata programs, had become numb to such concerns. Previous generations of Americans would have demanded congressional investigations or called for the standing government to fall, but nothing of consequence followed Snowden's revelations, itself a measure of the extent to which Americans had become socialized to accept nonstop invasive surveillance as a social fact.

RJG: This is important background for understanding the current moment, isn't it? I mean, one way to interpret what you're saying is that there's a dotted line connecting Safeway Rewards and Tesco Clubcards to full-on NSA and MI5 surveillance dragnets. At the beginning, lots of consumers were willing to trade a bit of personal information for slight discounts on potato chips or toilet paper, and then it grew from there in the post-9/11 period—an Orwellian slippery slope.

If this is true, then the situation in the US has at least some similarities to China's digital police state. There, it seems that much of the population is willing to comply with intrusive surveillance, geospatial monitoring, mandatory biometric scans, and so on in exchange for more efficient delivery of services, with little awareness or concern that the government has corralled millions of Uighurs into internment camps using these same technologies. We don't have internment camps in America (unless of course you count prisons), but as in China, many people are OK with giving up intimate personal data for convenience, to get free email, social media accounts, or other services.

DHP: Your book War Virtually describes a vast array of cutting-edge, creepy military technological developments, and you express serious concerns that these well-funded programs present dangers to an open and free society. Can you provide a brief overview of the sorts of programs you examine in the book and the dangers they present?

RJG: Around 2010, the US military began stepping up its research on predictive modeling and simulation programs. They’re aimed at aggregating and analyzing massive datasets from satellite images, surveillance photos from drones, signals intelligence, open source data like tweets, news articles, and social media posts, reports from military and intelligence sources, and more. As you mentioned earlier, Total Information Awareness was cancelled, but elements of it were absorbed by the NSA—and overseas, the US military made use of a comparable DARPA program that came to be known as Nexus7. It reportedly harvested all of Afghanistan’s mobile phone and email data, along with other information like real-time geolocation data and biometric information. Nexus7 made all the information available to intel analysts, who presumably used it to target suspected insurgents.

I also look at the weaponization of social media by reviewing the case of SCL Group, a British defense contractor that was the parent company of the infamous Cambridge Analytica. Propaganda firms are proliferating today, and although they usually call themselves things like “political consultancies” or “strategic communications” companies, they often cross over into psychological operations. What’s new is that many of them now specialize in microtargeting individual users online with algorithmically-driven messages and ads tailored to their personality profiles. Social media firms like Facebook and Twitter have enabled this kind of mass manipulation and have so far evaded meaningful regulation in the US.

DHP: Your book also discusses robotic systems—can you talk about them?

RJG: Early on, I talk about military robots, especially autonomous and semi-autonomous weapons and surveillance systems. Drones are the centerpiece—they're advertised as "precision" weapons, but drone strikes have killed thousands of civilians in Central Asia and the Middle East. Apart from this, the FBI has used surveillance drones domestically. It's possible that in the future we'll be under frequent surveillance from drones, which have become incredibly cheap. Drones outfitted with cameras start at less than a thousand dollars, making it feasible for governments and corporations to spy on ordinary citizens. The democratization of drones brings a whole set of other dangers—imagine an improvised hobbyist drone with explosives flying into Times Square on New Year's Eve.

DHP: In War Virtually, you write that large numbers of soldiers, airmen, sailors, and Marines distrust robots, while those designing the hardware of warfare seem to be increasingly relying on such developments. Tell us more about this tension.

RJG: Obviously, military and intelligence agencies aren't monolithic. The military branches have always competed with each other for resources; the Pentagon's civilian leadership regularly has differences with military officers; military brass are often out of touch with ordinary soldiers; and so on. There's also a divide between rank-and-file soldiers and military researchers who work on robotic and lethal autonomous weapon systems. This shouldn't be surprising—the designers won't have to interact with the robots on the battlefield. But infantry troops, pilots, sailors, and other military personnel may someday be required to work with the machines. They're worried—and they should be, because there have been multiple cases of semi-autonomous weapon systems unleashing lethal force on friendly troops.

DHP: Where do you see this going over the long term?

RJG: The four main branches of the military all have R&D labs where social scientists—mainly psychologists—are conducting "trust calibration" research, looking for ways to overcome soldiers' mistrust of robotic systems. They're experimenting with lots of techniques: developing anthropomorphic designs, programming a sense of "ethics" into the AI software, implementing new military training, and creating better user interfaces.

It's hard to say whether or not they'll succeed. For decades, the military has used fairly simple psychological techniques to overcome soldiers' aversion to killing other humans. Now, military researchers are working hard to develop techniques that might persuade soldiers to unquestioningly place their trust in robots. It would be foolish to assume that they'll fail—it's possible that military personnel might be capable of dehumanizing others, as they have for centuries, while simultaneously humanizing the robots sent to kill them.

DHP: Since we’re talking about robots, I have to ask you about what happened recently in San Francisco, where the city’s Board of Supervisors approved, then reversed, a policy allowing the police to use killer robots in certain situations. We seem to be on the precipice of an historical moment where combinations of AI and various hardware developments make it attractive to deploy remote killing machines that will bring new forms of depersonalized death. What can you tell us about where we are at this moment, where we seem to be going, and is there any way to stop what seems to be the coming 21st century robot wars?

RJG: You're right—we're at a crucial moment, a time when public and private institutions are rapidly adopting semi-autonomous and autonomous systems that use AI and machine learning for all kinds of things, including surveillance and killing. Now's the time to push back hard. The main reason that the SF Board of Supervisors backpedaled and decided not to allow city police to use lethal robots was public outrage—citizens were concerned about the dangers of unleashing these new technologies on the streets, and policymakers got the message. None of this is inevitable, but stopping it depends on public resistance. What makes things challenging is the fact that there are other countries—China, for example—that have embraced these technologies. In other words, this isn't just an American problem, it's a global problem.

In the San Francisco case, the media didn’t really look into the question of how many other cities have already used remote-controlled lethal robots. For instance, in 2016, the Dallas police department used a robot, loaded with explosives, to kill Micah Johnson after he gunned down several officers. Johnson was a troubled Army veteran who suffered from PTSD and other mental health problems after returning from the war in Afghanistan. Another thing that most media outlets missed is the fact that the SF Board of Supervisors didn’t ban police robots for surveillance.

Let's shift the topic a bit—your book lays bare the past, present, and future of American surveillance. You've found that US spy agencies have been much more likely to scrutinize the activities of those on the political left—say, labor organizers or socialists—than those on the right. Why has this pattern been a recurring theme over the past century?

DHP: My book argues that since its creation, the FBI has always been American capitalism's police. J. Edgar Hoover cut his teeth at the FBI's predecessor, the Bureau of Investigation, during the 1919 Palmer Raids, attacks on foreigners accused of polluting America with efforts to democratize workplaces and fight for workers' rights. The raids led to the arrest of some 10,000 leftist radicals and the deportation of Emma Goldman and hundreds of others. During the 1950s Red Scare, the FBI ran massive surveillance campaigns against labor organizers, activists for racial equality and school integration, and others struggling for equality, claiming these people were communist threats to America. Some of these people were socialists, communists, or Marxists; others weren't—but all of them were threats to American capitalism's rigged system of inequality, and the FBI protected that inequality. It's no accident that the FBI has historically devoted far more energy to monitoring and harassing leftist groups while paying relatively little attention to violent fascists, who are essentially aligned with the basic tenets of our capitalist system.

RJG: So it sounds like spy agencies have been connected to corporate capitalism from the beginning.

DHP: Links between US military and intelligence agencies and global capital have long been with us, and there have historically been whistleblowers from inside the machine who've sounded alarms about these connections—people like USMC Major General Smedley Butler, author of War Is a Racket, who warned of robber barons leading the US into war and trying to unseat President Franklin Roosevelt, and CIA officer Philip Agee, who risked his life publishing his 1975 Inside the Company: CIA Diary. My guess is that as global climate change increasingly raises questions about whether humankind can survive capitalism, the FBI and CIA will increase their surveillance and harassment of individuals and groups working to save the planet from ecological destruction.

Since we're talking about surveillance, I'm interested to hear more about how military contractors are using things like big data and AI for what you describe as almost precog-style predictive modeling—like something out of Philip K. Dick's The Minority Report. Most of the activities you describe in War Virtually have military applications for controlling rather than liberating people. Can you describe the basics of some of these operations?

RJG: Ever since Pearl Harbor, US military and intelligence agencies have been obsessed with collecting as much information as possible, not only to intercept attacks on American interests, but to anticipate perceived threats, at home and abroad. For much of the past century, human intelligence agents analyzed intelligence data flowing in from around the world. But over the past decade, military and intelligence agencies have been pouring lots of money into predictive modeling programs. These software packages aggregate and analyze huge amounts of real-time information—social media trends, geolocation data from cellphones, online news reports, satellite photos, video feeds from surveillance drones—and archived information such as biometric data, demographic records, credit and property records, and so on. An early example of this was an effort by Air Force researchers to create a “social radar” capable of seeing into the hearts and minds of people. Another example was Nexus7, which I mentioned earlier. Defense firms developing these technologies use various proprietary forecasting algorithms based on predictive models—Bayesian statistical models, agent-based models, discrete event simulation models, epidemiological models, and others.

Maybe the biggest problem with these programs is the classic "garbage in, garbage out" dilemma—predictive modeling software is only as good as the data that goes into it. Much of that data is flawed or biased, and many of the models rely on false analogies. One example I mention in the book is a predictive modeling program based on epidemiological models. In these models, a central assumption is that ideas are comparable to infectious diseases—in other words, that the spread of "dangerous" ideas is the predominant motivation for protests or uprisings—as if dire economic conditions, severe political repression, retribution, or other motivations aren't important. Incidentally, none of these programs predicted the rise of ISIS or Russia's invasion of Ukraine.
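To make the false analogy concrete, here is a minimal, purely hypothetical sketch of the SIR-style "idea contagion" reasoning González describes. It is not drawn from War Virtually or from any actual forecasting program; the function name, parameters, and numbers are invented for illustration. The point is what such a model leaves out: its single "transmission" term has no place for economic conditions, repression, or any other motivation.

```python
# Toy sketch only: a discrete-time SIR-style model that treats a
# "dangerous idea" as if it were a pathogen. All names and parameters
# are invented for illustration; nothing here reproduces any real
# predictive modeling program.

def simulate_idea_contagion(population=10_000, initially_exposed=10,
                            transmission_rate=0.3, recovery_rate=0.1,
                            days=60):
    """Return (day, number 'infected' with the idea) for each simulated day."""
    susceptible = float(population - initially_exposed)
    infected = float(initially_exposed)
    recovered = 0.0
    history = []
    for day in range(days):
        # Contact between idea-holders and everyone else drives "infection";
        # note there is no term for poverty, repression, or other grievances.
        newly_infected = transmission_rate * infected * susceptible / population
        newly_recovered = recovery_rate * infected
        susceptible -= newly_infected
        infected += newly_infected - newly_recovered
        recovered += newly_recovered
        history.append((day, round(infected)))
    return history


if __name__ == "__main__":
    for day, infected in simulate_idea_contagion():
        print(f"day {day:2d}: ~{infected} people 'infected' with the idea")
```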

DHP: You also write about how these technologies are being used on the home front, for policing America’s cities.

RJG: Exactly—they’re being used domestically, by local police departments. Because these predictive policing programs typically use algorithms that incorporate historical crime data, they tend to result in increased surveillance in poor and minority neighborhoods.

Another point about predictive modeling programs: they can give military and intelligence analysts a false sense of confidence. It’s easy to imagine scenarios in which predictive analytics programs make it easier to pre-emptively launch missile attacks on civilian targets by mistake, or detain an innocent person who’s incorrectly identified as a threat. By their very nature, predictive modeling programs tend to absolve human decision makers of responsibility. Many military elites are blinded by techno-optimism, and have a vested interest in adopting high-tech solutions.

DHP: What do you see as humanity’s best hope for resisting being managed by these types of programs?

RJG: There’s a growing tech resistance movement made up of current and former researchers, scientists, and employees of technology firms, including the giants—Google, Microsoft, Amazon, Facebook-Meta. We’ve seen several examples of these workers pushing back against the militarization of their companies: Google researchers protesting Project Maven (a Pentagon contract using AI to analyze drone footage); Amazon workers’ protests against the use of facial recognition technology by US Immigration and Customs Enforcement; and Microsoft employees’ opposition to a deal providing augmented reality headsets to the US Army. There are also non-profits and activist organizations like Tech Inquiry, EPIC (the Electronic Privacy Information Center), Own Your Data, Mijente, and NeverAgain.tech. There’s reason for optimism, but much more still needs to be done.

Speaking of activists: a significant portion of The American Surveillance State examines how the FBI spied on intellectuals—people like Seymour Melman, Edward Said, Saul Landau, Alexander Cockburn, and Andre Gunder Frank—critics of American foreign policy, corporate capitalism, or US militarization. But the FBI also spied on economist Walt Rostow, a strident anti-communist who served as LBJ's national security advisor. Why were they after Rostow? He was a champion of American empire.

DHP: During the early Cold War, the FBI was so paranoid that it suspected pretty much anyone working on issues of poverty might be a communist. The notion that Walt Whitman Rostow might have been any flavor of Marxist was nuts—especially when you consider Rostow's contributions to advocating the genocidal bombing campaigns against Vietnamese civilians under Operation Rolling Thunder. When anthropologist Oscar Lewis began studying the "culture of poverty" in the 1960s, the FBI intensified its investigation of him, including spying on him while he conducted ethnographic fieldwork in Mexico, as if concerns about poverty and inequality made him a Marxist. Rostow's father had radical roots (he named Walt's brother Eugene Victor Debs Rostow), and the FBI believed an aunt had been involved in radical politics, so no matter how many communists Rostow helped kill, the Bureau clung to the crazy idea that he might be some sort of crypto-communist. It didn't matter that he subtitled his magnum opus A Non-Communist Manifesto; the FBI was so packed with conservative anticommunists that it couldn't understand that Rostow was a liberal anticommunist. The early Cold War had plenty of liberal anticommunists (some of them working through CIA-funded liberal intellectual fronts), while the FBI was filled with paleoconservative anticommunists.

RJG: Given the ways in which the American surveillance state has historically zeroed in on intellectuals and activists, can you talk about what the risks are today, in an era of online Zoom classes, cellphones, and social media?

DHP: America has long embraced anti-intellectualism, but there's something new today, with anti-science movements rejecting basic findings about things like climate change, vaccines, COVID, or the impacts of poverty, especially rejecting scientific findings that challenge the unregulated growth of capitalism. In some sense this new level of political distrust of the findings of the physical sciences is catching up to the long-held distrust of social science data that challenges basic tenets of faith in capitalism's ability to meet human needs. The movements to police critical thought in US schools and universities are part of these same efforts to monitor and limit free inquiry, and these developments continue the sort of McCarthyistic tactics of the past that I analyze in The American Surveillance State. When we add new levels of public surveillance—including social media, everyone walking around with recording and tracking devices in their phones, and 24-hour reactionary "news" coverage—to these old McCarthyistic tropes, we're living in a new sort of constant surveillance bubble.

Many of the McCarthy-era thought-police functions performed by the FBI in the 1950s are today outsourced to reactionary private groups and "news organizations" that target intellectuals critical of various US policies. But we know from documents made public by WikiLeaks, Snowden, and others that tracking and surveillance of dissidents continues on a massive scale, and we can assume this data will continue to be used as it has historically been used: to monitor those who challenge the fundamental tenets of American capitalism and inequality. Obviously, social media, cellphone location data, Zoom-based classes, and other basic features of our internet world make the mechanics of surveillance relatively simple.

RJG: Do you have any suggestions for how ordinary people can escape this kind of high-tech surveillance? Or at least keep it to a minimum?

DHP: Your behavior is probably a better model for escaping some of this surveillance than mine—you are pretty much off Twitter, Facebook, China's TikTok surveillance system, and other forms of social media, while I engage in some of these mundane forms of shared legibility. Not to sound like someone advocating wrapping your head in tinfoil, but doing things like keeping GPS turned off on your phone, resisting consumer tracking discounts, blocking cookies, using apps and services like Signal and ProtonMail, distrusting unencrypted cloud servers, and staying aware that we leave digital trails everywhere we go has some impact. But the most important thing we can do is advocate for the sort of policies on digital tracking that exist in the EU and work to impose legal limits on corporate and intelligence-agency surveillance of our electronic selves. Given how many of America's MSNBC liberals now look to the FBI and CIA as potential saviors, I think the US is a long way from this happening—but history is nothing but change, so I assume we'll eventually get to a breaking point where calls to limit surveillance will rise.
