The growing number of political victories won with the help of social media, the revelations about the complex manipulation they can enable, the launch of “digital citizenship”, the expanding use of Bitcoin, and the year-end crisis triggered by the cyber attack against Sony Pictures, which escalated into a terrorist threat against American cinemas, were only a few of the signs in 2014 of what analysts already call the “maturation” of the new technologies shaping life and society in the 21st century, or the digital “coming of age”.
a) In October 2014, Estonia became the first country to offer e-residency to people around the world: state-backed digital identities that give access to services such as online banking, education, and healthcare. The issuing of smart ID cards was planned to begin towards the end of 2014, with anyone eligible to apply to become an “e-Estonian”. At first, the card will only be available from Police and Border Guard offices in Estonia, but there are plans to extend the capacity to process e-residency applications to Estonian embassies abroad by the end of 2015.
The issuing of digital identities comes as part of the so-called “e-Estonia” initiative that aims to make the Baltic country “one of the most advanced e-societies in the world” through new digital infrastructure and collaborations between the government and the ICT industry. Moreover, the project aims to attract about ten million new “e-Estonians” from all over the world by 2025.
Currently, nearly 90% of Estonian citizens use a digital ID card, which gives access to all of Estonia’s secure e-services: the chip on the card carries embedded files which, using 2048-bit public key encryption, enable it to serve as definitive proof of identity in an electronic environment. The card is used not only as proof of ID and as a travel document within the EU, but also as a national health insurance card, as identification when logging into bank accounts from a home computer, as a pre-paid public transport ticket in Tallinn and Tartu, for digital signatures and i-voting, for accessing government databases and for picking up e-Prescriptions.
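To illustrate the principle at work, the sketch below (Python with the third-party cryptography package, chosen purely for illustration) shows how a 2048-bit key pair of the kind embedded in the card’s chip can produce a signature that anyone can verify; it is not Estonia’s actual implementation, in which the private key never leaves the card.

```python
# Illustrative sketch only: how a 2048-bit RSA key pair, like the one on the
# ID card's chip, produces a signature anyone can verify. On the real card the
# private key never leaves the chip and is unlocked with a PIN.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

document = b"I, the cardholder, approve this contract."

# Signing uses the private key (held only by the cardholder).
signature = private_key.sign(document, padding.PKCS1v15(), hashes.SHA256())

# Anyone with the public key can confirm the document was signed by the
# cardholder and has not been altered; verify() raises an exception otherwise.
public_key.verify(signature, document, padding.PKCS1v15(), hashes.SHA256())
print("signature verified")
```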
While letting at least 10 million people around the world choose to associate with Estonia via e-identities is seen as generating additional security for the country, analysts see such a project as the beginning of the erosion of the classic nation state’s hegemony.
b) In November and December 2014, Sony Pictures suffered repeated cyber attacks by a group calling itself the “Guardians of Peace”, which illegally released several movies online, as well as corporate emails. The attacks were followed by threats of terrorist attacks against movie theaters that might show The Interview, a comedy about the assassination of the North Korean dictator Kim Jong-un. In the end, Sony Pictures announced that it would release the movie.
The reaction in the US was confused and panicky, with major cinema chains and distributors declaring they would not screen or market The Interview out of fear for customer safety, obliging Sony to cancel not just the film’s cinema release but also its home video release. Although it had already been publicly premiered in Los Angeles, the movie seemed destined to become the Californian equivalent of cold war samizdat – covertly viewed, subversively disseminated – or a collector’s item, possessed by the supposedly fortunate few.
Most analysts considered that, whatever their motivation, those in the US responsible for preventing the film from being screened had, in effect, handed a victory to the hackers, to blackmailers, to actual and would-be terrorists of every stripe and to the North Korean regime that, despite its denials, has been identified by the FBI and South Korea as the dark force behind the attacks. In their view, this victory for intimidation amounts to a defeat for America’s principle of freedom of speech and expression that cannot be allowed to stand, as President Obama said when he finally addressed the affair: “We cannot have a society in which some dictator some place can start imposing censorship here in the United States…”.
The incident drew attention to the threat posed by cyber warfare, which is significant because it respects no borders and is growing in frequency and potency. Cyber attacks via the internet are global in reach, often anonymous and hard to trace, and difficult to deflect.
In this context, North Korean defectors disclosed that Pyongyang has developed substantial cyber warfare resources within the so-called Bureau 121, part of the General Bureau of Reconnaissance, an elite spy agency run by the military. Bureau 121 is staffed by some of the most talented computer experts in the state and carries out the state-sponsored hacking the Pyongyang government uses to spy on or sabotage its enemies. Military hackers are handpicked and trained from as young as 17 at North Korea’s military college for computer science, the University of Automation. The hackers in Bureau 121 come from among the 100 students who graduate from the university each year after five years of study; over 2,500 apply for places at its campus in Pyongyang, behind barbed wire. Bureau 121 has about 1,800 cyber-warriors and is considered the elite of the military. Teams working abroad operate under cover as employees of North Korean trading companies.
Outside that division, there are a host of associated units. These include Unit 35, which trains North Korea’s digital soldiers, and the 225th Bureau, or Office 225, which does some cyber work but tends to focus on physical infiltration into enemy states. There have also been suggestions that North Korea borrows from the very capable Chinese cyber divisions: the country is a major customer of China Unicom, and Bureau 121 is believed to have operations on Chinese soil.
Computer analysts stress that, given the increasingly digital nature of warfare, it makes economic sense for North Korea to pour resources into entities such as Bureau 121. On the digital battlefield, attackers have a distinct advantage: for hackers to win, they only need to breach a system once, while defenders must deflect each and every attack to be successful. North Korea’s cyber warfare capability was recently acknowledged by Gen. Curtis Scaparrotti, the commander of United States Forces Korea, who testified before the U.S. Congress[1] that North Korea is emphasizing the development of its asymmetric capabilities and that its hackers are capable of conducting cyber-espionage as well as disruptive cyber attacks.
On the other hand, by blocking Internet access for most of its population, North Korea is able to develop a cyber defensive barrier, protecting North Korean infrastructure from the very cyber security threats Bureau 121 is dedicated to exploiting. Because North Korea has few Internet connections to the outside world, anyone seeking intelligence on its networks has to expend more resources for cyber reconnaissance.
c) Another significant case was Facebook’s response to the UK government, which accused Facebook of having blood on its hands over the killing of soldier Lee Rigby. Facebook offered no confident, considered position and remained silent, leaving assumptions and accusations to keep building.
Fusilier Lee Rigby, a British Army soldier who was off duty, was attacked and killed on 22 May 2013 near the Royal Artillery Barracks in Woolwich, southeast London. The killers, British citizens of Nigerian origin who had converted to Islam, said that they had killed a soldier to avenge the killing of Muslims by the British armed forces. On 19 December 2013, the attackers were found guilty of Rigby’s murder and were sentenced to life and 45 years’ imprisonment, respectively. On 25 November 2014, the findings of the British parliamentary inquiry into the murder mentioned that one of the killers had discussed killing a soldier on Facebook with a foreign-based extremist known as ‘Foxtrot’. The UK authorities did not have access to the details of the conversation until June 2013, and claimed that had MI5 had access to this exchange, the investigation would have become a top priority. Facebook said that it did not comment on individual cases, but responded that “Facebook’s policies are clear, we do not allow terrorist content on the site and take steps to prevent people from using our service for these purposes”. However, prior to the killing, Facebook had blocked some of the killers’ accounts, which had been flagged for links with extremism. The accounts had been flagged by an automated process, and no one at Facebook had manually checked them.
This brings to attention the problem of ethics and oversight. Social networks, most notably Facebook, Twitter and YouTube, have developed sophisticated processes to try to deal with terrorist propaganda, for example the brutal execution videos of Isis. But these policies only confirm that these companies – despite their claims of neutrality – now have to make editorial judgments, without the complex skills, experience and legal context of editorial organizations.
It also means networks with international plans for growth are making subjective decisions based on the western definition of terrorism, the western definition of law, the western definition of free speech. Is it enough for the chief executive, or the board, to treat the ethical implications of a network that is changing how, in Facebook’s case, 1.36 billion people – in all their norms and extremes – relate, interact and communicate as just one of many problems to consider?
Such developments, and many others, generate a growing sense of mistrust about the new technologies that most people have come to rely on. Many users now feel that the relationship amounts to this: the companies offer a long list of terms and conditions that nobody reads, and in return the user agrees to be advertised to, or about, and to be tracked and monitored by the government.
Also, most of the public and most governments failed to grasp the significance of Edward Snowden’s surveillance revelations, mainly the fact that there is no effective oversight of the new technologies.
On the other hand, the creators and promoters of the new technologies keep growing, even at the risk of becoming a new “bubble”. The big ones, such as Amazon and Google, are spending on warehouses, offices, people and machinery, and are buying other firms, while on the booming private markets venture capital (VC) outfits and others trade stakes in young technology firms. The spending boom is exemplified by Facebook, which recently said that its operating costs would rise in 2015 by 55-75%, far ahead of its expected sales growth. Silicon Valley’s icons are among the world’s biggest investors. Together, Apple, Amazon, Facebook, Google and Twitter invested $66 billion in the past 12 months, eight times what they invested in 2009 and double the amount invested by the VC industry. Together these five tech firms now invest more than any single company in the world: more than Gazprom, PetroChina and Exxon, which each invest about $40-50 billion a year. The five firms together own $60 billion of property and equipment, almost as much as General Electric, and they employ just over 300,000 people.
1. From Leibniz to Facebook: the history behind the new technologies
The power of the technologies that have become an indispensable part of the post-modern world rests on the code that all computers and digital gadgets run, which enables them to control games, spaceships and communications with equal facility.
The code at the base of today’s digital world traces back to the binary system, an idea the German philosopher and mathematician Gottfried Leibniz developed in 1679. Leibniz created a system that used only two digits, 0 and 1, and for that reason called it “binary”. Leibniz also imagined a mechanical calculator in which marbles could fall through an open hole to represent one and remain at a closed hole to represent zero. This calculator was never built, but Leibniz’s idea paved the way for the whole history of computing.
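To make the idea concrete, a few lines of Python (purely illustrative) show how any number or character can be written using only Leibniz’s two digits:

```python
# Purely illustrative: any number or character can be written with just
# Leibniz's two digits, 0 and 1.
for n in (0, 1, 5, 13, 255):
    print(n, "->", format(n, "b"))           # e.g. 13 -> 1101

# Text works the same way: each character becomes a pattern of bits.
for ch in "AI":
    print(ch, "->", format(ord(ch), "08b"))  # 'A' -> 01000001
```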
Leibniz’s binary code was put to use more than a century later, in 1804, when the French inventor Joseph Jacquard designed an automated steam-powered weaving loom controlled by punched cardboard cards. The presence or absence of a hole in each position programmed the loom to weave a certain pattern; a different punched card would make the loom weave a different pattern. The cards were effectively instructions for the loom – a forerunner of the modern computer program.
Jacquard’s idea returned to mathematics in 1842-43, with the British mathematicians Charles Babbage and Ada Lovelace credited as the ancestors of the hardware and software concepts. Charles Babbage took Jacquard’s idea further and designed the ‘Analytical Engine’, the first general-purpose calculating machine. His idea was that punched cards would feed the machine numbers and instructions about what to do with those numbers. Fellow mathematician Ada Lovelace described how punched cards could program the ‘Analytical Engine’ to run a specific calculation. Although the engine was never built and her program never ran, Lovelace is now widely credited as the world’s first computer programmer.
Jacquard and Babbage’s punched cards were taken up by the US census clerk Herman Hollerith, who in 1890 designed the ‘census machine’, a solution for the US census at the end of the 19th century, when it appeared that it would take eight years to record every citizen’s data manually. The machine used the new technology of electricity to encode each person’s census data on a punched card. A field of pins was pressed down onto the card; any pin that passed through a hole completed an electric circuit and was logged. Hollerith turned his invention into a business that would later become the IBM computer company.
In the early 20th century, the idea of using electricity to create code was picked up by military planners. At the end of World War One, German engineer Arthur Scherbius designed the Enigma machine, which could encipher and decipher secret coded messages. It soon became commercially available and was followed by more complex models. The Enigma code, used by Nazi Germany in World War Two, was cracked by British mathematicians working at Bletchley Park and this has been credited with shortening the war by two years.
In 1936, Alan Turing, later one of the Bletchley Park mathematicians, described the “Universal Turing machine”, based on the idea of feeding one machine many different instructions expressed in binary code – in effect, a multi-purpose computer.
Turing described a flexible machine that followed instructions on a long piece of tape – the equivalent of a modern computer’s memory. Because the coded patterns on the tape could easily be changed, the machine would be able to carry out almost any task. However, building a vast memory of instructions out of paper tape was impractical. In 1948, engineers at Manchester University found a way to store memory using electric charges instead – a technique inspired by the wartime radar equipment. This allowed them to build the first ever working general-purpose computer – the Manchester Small Scale Experimental Machine. Nicknamed “Baby”, it was programmed in binary code, contained 128 bytes of memory and filled an entire room. The “Baby” soon became the prototype for the first general-purpose electronic computer to be sold commercially in 1951, the “Ferranti Mark 1”, which, in the same year, was the first machine to play computer-generated music. The music program was written by a friend of Alan Turing, the British computer scientist Christopher Strachey.
Only ten years later, in 1961, three young programmers at the Massachusetts Institute of Technology were given the chance to experiment with an unusually “small” (it was still the size of two refrigerators) computer, the PDP-1, and designed “Spacewar!”, which is considered to be the first videogame. Two players, each controlling a spaceship, were tasked with destroying the other while orbiting a star. The game introduced many of the concepts familiar to game players today, including real-time action and shooting.
The development of smaller computers that could be built into the design of other machines also unlocked the possibility of space travel. In 1966, the Apollo Guidance Computer was designed for NASA’s Apollo space program, and within three years it helped Neil Armstrong and Buzz Aldrin reach the surface of the Moon. With only 74KB of memory, it was able to control a 13,000kg spacecraft orbiting the Moon at 3,500km/h, land it safely and return it to Earth.
Computing entered a new era in 1971, when Intel Corporation released the first commercial microprocessor. Based on the new silicon technology, the Intel 4004 packed the processing power of a computer into one tiny chip. Initially commissioned for a Japanese electronic calculator, the chips were soon being used in a wide range of machines, including some of the first home personal computers.
In 1975, computing enthusiasts in Silicon Valley, California, founded the “Homebrew Computer Club” to exchange ideas. The hobbyists built computers and wrote the programming languages that could run on them. Members included Steve Wozniak, who built the first Apple computer, which used a version of Beginner’s All-purpose Symbolic Instruction Code (BASIC). Another computing enthusiast of the time (though not a member of the Club), Bill Gates, focused on the software, writing Microsoft BASIC.
In the early 1980s, architects began using computer-aided design (CAD) programs to help design and draft bold new structures. Instead of laboring over paper drawings and handmade models, computers allowed designers to test new materials and construction techniques more quickly, accurately and cost-effectively. Today, CAD has not only revolutionized architecture and engineering but is empowering creative minds from fashion to landscaping.
In 1986, the deregulation of the stock market – known as the ‘Big Bang’ – saw computers revolutionizing financial markets. Out went the old system of traders shouting and gesturing buy and sell orders on a physical trading floor. In came electronic trading, with trades taking place in virtual market places.
One of the most ambitious projects to have used the growing power of computers to manipulate large amounts of data was the 1990 Human Genome Project – the bid to map all three billion letters in the human genetic code. The project lasted over a decade. The human genome was cut into random overlapping fragments, allowing the DNA sequence of each fragment to be worked out. Software written by computer scientists at the University of California Santa Cruz was then able to rapidly identify overlapping sequences and piece the genome together.
In the last decade of the 20th century, scientists were starting to see computing not only as a way to perform tasks but also as a way to share and collaborate. In 1991, the British computer scientist Tim Berners-Lee invented a system for linking documents and information via hyperlinks. He called it the ‘world wide web’. It could run on almost any computer, so that anyone connected would be able to access any information on the web. And because Berners-Lee never patented his technology, it spread quickly. Within five years, there were 100,000 websites worldwide; today there are estimated to be in excess of half a billion, and the science experiment has become a cultural phenomenon.
As the number of web pages rose dramatically, it became harder to find information. The web was in danger of becoming a victim of its own success.
In 1997, two students at Stanford University, Larry Page and Sergey Brin, devised a way to measure the popularity of web pages, based on how often a page was linked to. What began as an academic project soon turned into a business venture, and the new search engine – named Google – would become the way that most people found what they were looking for on the web.
The social communication breakthrough came in 2004 with Facebook, an $80bn social network created by psychology student Mark Zuckerberg and his roommates in their college dorm at Harvard University. ‘Thefacebook’, as it was originally known, quickly expanded to other Ivy League universities and to the world beyond. Today, Facebook programmers publish new code up to twice daily to build new features, providing global, real-time services for the 802 million users logged on every day.
In 2008, the world became acquainted with the individual computer programs that run on smartphones, known as apps, short for applications. Apple launched the first app store in July 2008, followed several months later by the Android Market. The Apple App Store saw 10 million downloads in its first weekend, with Facebook the most popular app by the end of the year. Smartphones are a fraction of the size of the original electronic computers – but with a memory that can store 100 million times more information.
2. Artificial Intelligence: on the way to becoming a threat
According to many sociologists, the next generation of digital devices will be “hidden”, meaning they will be an invisible, effortless part of our lives, which will bring both benefits and dangers.
Although today’s screens draw constant attention to themselves and machines demand ever more of our time, subsequent generations might not even consider them to be technology, and computing will finally slip beneath our awareness. Computer scientists have been predicting such a moment for decades. The phrase “ubiquitous computing” was coined at the Xerox Palo Alto Research Center in the late 1980s by the scientist Mark Weiser, and described a world in which computers would become what Weiser later termed “calm technologies”: unseen, silent servants, available everywhere and anywhere. In fact, we are beginning to see a movement away from screens, towards self-effacing rather than attention-hungry machines.
One significant example is Google Glass, which represents the kind of device that is “invisible” in the most literal sense, because the user’s primary interface with it is not looking at or typing onto a screen, but speech, location and movement. This category also includes everything from discreet smartwatches and fitness devices to voice-activated in-car services. Equally surreptitious is the rising number of “smart” buildings – from shops and museums to cars and offices – that interface with smartphones and apps almost without us noticing, offering enhancements ranging from streamlined payments to “knowing” our light, temperature and room preferences.
Invisible lethal robots
The “invisibility” factor becomes more worrying when it characterizes the new generation of weapons. While science fiction has imagined that the armies of the future would get bigger and scarier, technological advances appear to be making them smaller and harder to see. The invisible army patrolling and attacking land, sea and air might in fact be much more powerful than any of the spectacular forces imagined in fiction.
Drones are the main component of the new race in military technology to create weapons that can go mostly unnoticed while still maintaining control on the battlefield and during civil unrest.
Russia recently announced plans to develop “invisible” weapons for its nuclear submarines, including on-board battle robots and underwater drones. The country’s new fifth-generation submarines could release drones that stay in place, offline, while the submarine itself moves away; activated remotely on command, the drone would maintain the impression that the submarine is still there, giving it time to leave the area undetected.
China’s long-range heat ray, which can cause searing pain by heating up water molecules under the skin while remaining invisible to anyone else around, is similar to technology that the US and Russia already have. The US Navy Laser Weapon, deployed on the USS Ponce, can shoot down small drones and hit targets on fast attack boats; it uses heat energy from lasers to incinerate drones and other targets.
These add to the U.S. fleet of drones, which patrols and attacks targets, including in the Middle East, but is barely acknowledged by the country’s government.
U.S. drone operations: the same legal background as the torture program
One of the aspects revealed by the recently published 525-page executive summary of the Senate Intelligence Committee’s report on torture, released to the public on Tuesday, Dec. 9, was the purported absence of presidential leadership in both the torture and the drone programs, as well as the fact that the same procedures that allowed deniability in the case of torture were – and continue to be – used in the case of the drone attacks.
In the case of torture, the report mentions claims about the ignorance of President George W. Bush (and, to a much lesser extent, Vice President Dick Cheney) about key parts of the CIA torture program. The report leaves the impression that Bush remained ignorant of the details of the program, and does not describe events in which the White House is known to have been involved. Analysts noted that the report does not say who approved the program. It mentions only that National Security Council legal advisor John Bellinger told CIA Director George Tenet’s staff on Aug. 2, 2002, that the National Security Advisor (at that moment Condoleezza Rice) had been informed that there would be no briefing of the President on this matter, but that Tenet had policy approval to employ the CIA’s “enhanced interrogation techniques.” The report did not mention who decided the president would not be briefed and, in that case, who could authorize a torture program.
In fact, the ultimate authorization for the program was included in a “Memorandum of Notification” (MON) that Bush signed on Sept. 17, 2001, shortly after the 9/11 attacks.
A MON (or “finding”) is a procedure used by the president to notify Congress about operations he orders the CIA to conduct that are not intended to be acknowledged by the U.S. government. According to the 1947 National Security Act, when the president authorizes the CIA to conduct covert operations, he must document what those operations will be and notify the Senate and House Intelligence committees. The procedure is used to provide legal cover for acts that would otherwise often be considered illegal.
During the days after 9/11, Cofer Black, who served as director of the CIA’s counterterrorism center, laid out a program to combat al-Qaeda that included not just the detention of top al-Qaeda figures but also the outsourcing of torture to the intelligence services of allies like Egypt and Jordan (and even adversaries like Syria and Libya), as well as the targeting of top al-Qaeda figures in drone strikes. Black combined the detention and the drone killing of al-Qaeda members in one MON. The MON was also designed to outsource all the important decision-making to the CIA, in order to give the President deniability. It called for the President to delegate blanket authority to the CIA Director to decide on a case-by-case basis on the targets for killing or torture.
According to several analysts, the White House deniability that the MON provided also created the conditions for the program to spin out of control, an aspect which is considered even more worrying since it is the same MON that authorizes the CIA’s current drone program. It was also mentioned that on the second day of Barack Obama’s presidency, he prohibited most forms of physical torture, but on the third, he authorized a CIA drone strike that killed up to 11 civilians.
The CIA’s use of drones is already seen to suffer from some of the same problems as the torture program. The CIA appears to have misinformed Congress about details of the program, despite the evidence that more than 1,000 people have been killed while targeting fewer than 50 terrorists. And like the CIA’s detention and torture of the wrong suspects, a number of drone strikes have killed the wrong people, but with even greater frequency.
Warnings from the scientists
The development of drones, and the many examples of computers finding new and creative solutions that had never occurred to humans across diverse fields, are already generating warning signals from some of the world’s top scientists, who see the machines slowly but surely getting smarter while the pursuits in which humans remain champions diminish.
When the artificial intelligence firm DeepMind was acquired by Google in early 2014, founder Demis Hassabis stipulated that Google create an ethics body to oversee its work in machine learning, a technology that could be used to examine patterns in research to fight disease, but also to kill people more efficiently.
Professor Stephen Hawking recently expressed his worries about what follows when humans build a device or write some software that can properly be called intelligent. An artificial intelligence (AI) of such kind, he fears, could spell the end of humanity.
Similar worries were voiced by Tesla boss Elon Musk in October 2014, when he declared that AI might be the “biggest existential threat” facing mankind, as well as the University of Oxford’s Professor Nick Bostrom, who said that an AI-led apocalypse could engulf us within a century.
Google’s director of engineering, Ray Kurzweil, is also worried about AI and concerned that it may be hard to write an algorithmic moral code strong enough to constrain and contain super-smart software. In the same line of thought, other scientists point out that AI is only dangerous because of the way it amplifies human goals. According to Neil Jacobstein, AI and robotics co-chairman at California’s Singularity University, “ethical outcomes from AI do not come for free” and we have to “consider the consequences of what we were creating and prepare our societies and institutions for the sweeping changes that might arise…It’s best to do that before the technologies are fully developed and AI and robotics are certainly not fully developed yet”.
Other scientists stressed that those working on AI were not really putting in place safety systems to stop their creations running amok, even if the general estimate is that human-level AI is still 10-20 years away.
However, for the time being, much of the work is concentrating on systems that specifically lack the autonomy and consciousness that could spell problems for the humans and the real danger is not expected to come from the AI itself but from the people it serves, the consciousnesses that set their goals. The typical example is again that of the drones that don’t kill unless they are programmed by people who instruct them to fly to specific coordinates and unleash a missile.
3. From Internet to Darknet and beyond
Invisibility and anonymity, as well as a growing power of manipulation, also characterize the evolution of the Internet, which, less than 25 years since its launch, seems to be losing some of its “democracy”.
According to the 2014-2015 edition of the annual Web Index released by the World Wide Web Foundation, led by the web’s inventor Sir Tim Berners-Lee, the web is becoming less free and more unequal.
The index – which measured the web’s contribution to the social, economic and political progress of 86 countries – ranked countries around the world in terms of universal access to the Internet, relevant content and use, freedom and openness, and empowerment. Four of the top five were Scandinavian, with Denmark in first place, Finland second and Norway third; the UK came fourth, followed by Sweden. This prompted the authors to conclude that the richer and better educated people are, the more benefit they gain from the digital revolution. Consequently, Sir Tim Berners-Lee called for the Internet to be considered “a basic human right. That means guaranteeing affordable access for all, ensuring internet packets are delivered without commercial or political discrimination, and protecting the privacy and freedom of web users regardless of where they live.”
The report also revealed that web users are at increasing risk of government surveillance, with laws preventing mass snooping weak or non-existent in over 84% of countries and online censorship on the rise.
For the first time, the report looked at net neutrality, the principle that all web traffic should be treated equally. It called on policy makers to introduce a raft of measures to fight net inequality, including: progress towards universal access by increasing the number of affordable net services; preventing price discrimination in internet traffic by treating the internet like any other public utility; investing in high-quality public education to make sure that no one is left behind by technological progress; and using the web to increase government transparency and protect freedom of speech and privacy.
The guardians of the Internet
Other objections were raised recently regarding the system and procedures that ensure that the internet itself, the central system that operates the World Wide Web, does not fall victim to hackers.
The system relies on 21 selected people who hold the keys to the central directory of the web: seven for the US east coast, seven for the west coast and seven for the backup procedures. They are the guardians of the domain name system (DNS), the register that links web addresses to IP addresses – the series of numbers that identify Internet-linked computers. If hackers were able to seize control of the DNS, they would control one of the most crucial parts of the Internet.
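As a small illustration of what that register does, a plain DNS lookup (Python standard library; nothing to do with Icann’s signing keys themselves) resolves a human-readable address into the numeric IP address computers actually use:

```python
# What the DNS register does, in one call: translate a human-readable web
# address into the numeric IP address that identifies a computer on the
# network. (A plain lookup; it does not touch Icann's signing keys.)
import socket

print(socket.gethostbyname("example.com"))   # e.g. 93.184.216.34
```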
These persons are members of the organization charged with overseeing the name and number systems of the Internet: the U.S.-based Internet Corporation for Assigned Names and Numbers (Icann), an American non-profit organization overseen by the U.S. Department of Commerce.
The keyholders meet four times a year, on the east and west coasts of the U.S., to generate new encryption keys for the system (they ‘change the password’ for the internet), and to verify the security of this directory, which if breached could completely undermine the functioning of the global network. During the meetings, the 14 primary keyholders carry out a scripted series of more than 100 actions.
The keyholders are a select group of security experts from around the world, with long backgrounds in internet security and who work for various international institutions. They were chosen for their geographical spread as well as their experience – no country is allowed to have too many keyholders. They travel to the ceremony at their own, or their employer’s, expense.
Most of the keyholders have been with the organization since the first ceremony. The initial selection process began with an advertisement on Icann’s site, which generated just 40 applications for 21 positions. Since then, only one keyholder has resigned: Vint Cerf, one of the fathers of the internet, now in his 70s and employed as ‘chief internet evangelist’ by Google. At the first key ceremony, Cerf told the room that the principle of one master key lying at the core of networks was a major milestone and predicted that in the long run, this hierarchical structure of trust will be applied to other functions that require strong authentication.
The backup keyholders, seven people selected from around the world, provide a last-resort capability to reconstruct the system if something calamitous were to happen. Each of the 14 primary keyholders owns a traditional metal key to a safety deposit box, which in turn contains a smartcard, which in turn activates a machine that creates a new master key. The seven backup keyholders have smartcards that contain a fragment of the code needed to build a replacement key-generating machine. Once a year, these shadow holders send Icann a photograph of themselves with that day’s newspaper and their key, to verify that all is well.
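The idea of splitting a secret into fragments that are useless on their own can be illustrated with a toy XOR-based scheme; this is an analogy chosen for exposition only, not Icann’s actual key-management procedure.

```python
# Toy illustration of splitting a secret into fragments that are useless on
# their own: an XOR scheme in which every fragment is needed to rebuild the
# secret. This is an analogy only, not Icann's actual key-management scheme.
import os
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int) -> list:
    """Split `secret` into n fragments; all n are required to reconstruct it."""
    fragments = [os.urandom(len(secret)) for _ in range(n - 1)]
    fragments.append(reduce(xor, fragments, secret))
    return fragments

def rebuild(fragments) -> bytes:
    return reduce(xor, fragments)

master = b"hypothetical key-generating material"
pieces = split(master, 7)                  # one fragment per backup keyholder
assert rebuild(pieces) == master           # all seven together restore it
assert rebuild(pieces[:6]) != master       # six alone reveal nothing useful
```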
Besides the keyholders, there are several witnesses at the ceremonies, which are broadcast live on Icann’s site. Some are security experts, and two are auditors from PricewaterhouseCoopers (with global online trade currently well in excess of $1tn, the Internet key has a serious role to play in business security).
The master key is part of a new global effort to make the whole DNS (domain name system) secure and the internet safer: every time the keyholders meet, they are verifying that each entry in these online ‘phone books’ is authentic. This prevents a proliferation of fake web addresses which could lead people to malicious sites, used to hack computers or steal credit card details.
Currently, Icann is helping to roll out a new, secure system for verifying the web, expected to become functional in the next three to five years. If the master key were lost or stolen today, the consequences might not be calamitous: some users would receive security warnings and some networks would have problems, but not much more. With the new, more secure system, the effects of losing or damaging the key would be far graver. While every server would still be there, nothing would connect: it would all register as untrustworthy. The whole system, the backbone of the internet, would need to be rebuilt over weeks or months. What would happen if an intelligence agency or hacker got hold of a copy of the master key? It’s possible they could redirect specific targets to fake websites designed to exploit their computers – although Icann and the keyholders say this is unlikely.
There are concerns about such an essential process being the exclusive responsibility of an American non-profit essentially overseen by the U.S. government. In light of the Snowden disclosures, many governments already called for the system to be put on a global footing and overseen by a multilateral organization, such as the United Nations.
Both the U.S. commerce department and the Department of Homeland Security take a close interest in Icann’s operations. In the wake of the ongoing revelations of NSA spying, and of undermined internet security, this does not sit well with many of Icann’s overseas partners. Some, including Russia and Brazil, are calling for a complete overhaul of how the internet is run. The European Commission also wants changes to this system, though it still expresses its faith in Icann; the EU recently called for a “clear timeline for the globalization of Icann”.
Darknet and Tor: the future of the Internet?
For many, the future Internet seems to be the Darknet, which allows everyone to communicate and do business without fear of leaving digital fingerprints.
The Darknet is a subsection of the so-called Deep Web – the part of the World Wide Web that is not indexed and does not show up on search engines or social media. While most people never access it, it is estimated to be about 500 times as big as the ‘surface web’; other estimates suggest the deep web is 4,000 to 5,000 times larger. However, since more information and sites are always being added, it can be assumed that the deep web is growing exponentially, at a rate that cannot be quantified.
The Deep Web (also called the Deepnet, Invisible Web, or Hidden Web) should not be confused with the dark Internet, the computers that can no longer be reached via the Internet, or with Darknet, a network which could be classified as a smaller part of the Deep Web. Some prosecutors and government agencies think the Deep Web is a haven for serious criminality.
A darknet is a private network in which connections are made only between trusted peers, or ‘friends’ (F2F), using non-standard protocols and ports. Its defining feature is that sharing is anonymous and users can communicate with little fear of interference, which is why darknets are often associated with dissident political communications and with illegal activities.
The Darknet is accessed through Tor (an acronym for “The Onion Router” – just as the vegetable has many layers, there are many layers of encryption on the network), a web browser built to be as anonymous as possible, allowing a user to browse the internet without giving away an IP address. Tor sends internet data through a series of ‘relays’, adding extra encryption and making web traffic practically impossible to trace. One of the main ideas behind Tor was that it could be used to get around internet censorship in countries that take certain websites offline.
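The layering that gives the “onion” its name can be sketched in a few lines (Python with the cryptography package, as an illustration only, not Tor’s real protocol): the sender wraps the message in one layer of encryption per relay, and each relay can peel off only its own layer.

```python
# Minimal sketch of the "onion" idea: the sender wraps a message in one layer
# of encryption per relay; each relay can remove only its own layer and learns
# nothing but the next hop. Illustration only, not Tor's actual protocol.
from cryptography.fernet import Fernet

relay_keys = [Fernet.generate_key() for _ in range(3)]   # entry, middle, exit

def wrap(message: bytes, keys) -> bytes:
    # Encrypt for the exit relay first and the entry relay last, so the layers
    # come off in the order the message travels through the circuit.
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def route(onion: bytes, keys) -> bytes:
    for key in keys:                         # each relay peels one layer
        onion = Fernet(key).decrypt(onion)
    return onion

onion = wrap(b"GET http://example.onion/", relay_keys)
print(route(onion, relay_keys))              # only the last relay sees the request
```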
Tor was launched on August 13, 2004, by Roger Dingledine and Nick Mathewson, members of Free Haven, a Massachusetts Institute of Technology research project that looked for ways to store data so that it could resist “attempts by powerful adversaries to find and destroy” it. The third member of the Tor team was Paul Syverson, a mathematician with a PhD in philosophy from Indiana University, who was working for the US Navy on ways to use the internet anonymously.
While the Tor browser was initially developed by the U.S. military as a way of navigating the internet secretly, it has since become an open source project. The military released the encrypted browser as a way of providing cover for its own operations, but, since Tor uses a non-standard protocol, people observing network traffic can identify it easily even if they cannot see what the user is looking at.
Tor continues to receive funding from the US State and Defense Departments, but observers agree that it has entered a paradoxical relationship with its sponsor, the government of the United States. On the one hand, the authorities – who lie behind its creation in the first place – continue to fund its development heavily; on the other, they are seeking to destroy it.
According to the Tor Project’s latest financial statements, it received more than $1.8 million in federal funding in 2013, primarily from the State Department and the Department of Defense, some of it channelled through independent organizations such as Internews Network, a non-profit that aims to support freedom of information around the world. This amounts to about 60 per cent of its total funding.
At the same time, documents disclosed in October 2013 by Edward Snowden – who used Tor to send top-secret information to The Guardian newspaper – reveal that both the National Security Agency (NSA) and the British Government Communications Headquarters (GCHQ) have made efforts to disable Tor, or at least to strip its users of anonymity. Although Tor remained fundamentally intact, the two agencies had some success by targeting individual browsers used in conjunction with Tor and taking control of targeted computers, which allowed them to view all the files on a machine as well as all of its online activity.
The US government’s self-defeating approach was again manifested recently, when two researchers at Carnegie Mellon University in Pittsburgh, Pennsylvania, Alexander Volynkin and Michael McCord, revealed that they had launched a successful cyber attack on Tor between January and July 2014, and had unmasked a significant number of people using the network. They were due to present their findings but the event was cancelled for “legal reasons”.
In an official blog post, Roger Dingledine, one of the three founders of Tor, announced an immediate upgrade to the system, which would “close the particular protocol vulnerability the attackers used”. However, further scrutiny revealed that Volynkin and McCord’s department, the Software Engineering Institute, has received $584 million in funding from the U.S. Department of Defense, with a specific remit that includes finding security vulnerabilities.
Tor has an increasing number of users, including the military, law enforcement officers and journalists, who use it as a way of communicating with whistle-blowers, as well as members of the public who wish to keep their browser activity secret. It has also been associated with illegal activity, allowing people to visit sites offering illegal drugs for sale and access to child abuse images, which do not show up in normal search engine results.
Tor for freedom
Despite Tor’s criminal applications, partisans of the freedom of information refuse to condemn the network. Journalists and campaigners in countries such as Iran, Syria and China have found the network invaluable in avoiding detection by their governments. Russian President Vladimir Putin is said to be so worried about Tor’s potential for undermining his regime that he has announced a prize of four million rubles to anyone who can crack the network.
In 2010, Tor won the award for projects of social benefit at the Free Software Awards. In a statement, the judges said: “Using free software, Tor has enabled roughly 36 million people around the world to experience freedom of access and expression on the internet while keeping them in control of their privacy and anonymity.”
Since many countries lack the equivalent of the United States’ First Amendment, Darknet grants everyone the power to speak freely without fear of censorship or persecution. According to the Tor Project, the Hidden Services have been a refuge for dissidents in Lebanon, Mauritania, and Arab Spring nations; hosted blogs in countries where the exchange of ideas is frowned upon; and served as mirrors for websites that attract governmental or corporate angst, such as GlobalLeaks, Indymedia and Wikileaks. The New Yorker’s “Strongbox”, which allows whistleblowers to securely and anonymously communicate with the magazine, is a Tor Hidden Service. The Tor Project says that authorities offer similarly secure tip lines, and that some militaries even use Hidden Services to create online secure command and control centers.
Activists have expressed worry that high-profile cases of abuse of the anonymous software might be used to demonize the technology more generally, with freedom of speech activists stressing that the use of encryption and other privacy practices must not be deemed a suspicious activity. Rather, it must be recognized as an essential element for practicing freedom of speech in a digital environment. Similarly, the Tor Project, which makes the browser, argued that it actually keeps normal people from becoming victims of crime, since Tor and other privacy measures can fight identity theft, physical crimes like stalking, and so on.
The next Internet?
Most analysts see the development of Darknet and Tor as the story of the maturation of the digital age. Nowadays the internet is grown up, it recognizes no boundaries, and it is very difficult to stop anything from happening. It is starting to reflect life more closely, in all its light and shade. The Darknet is moving from fringe to mainstream, attracting anyone who wants anonymity – be they hired killers or humble bloggers. The future of the net is likely to be an increased proliferation of these non-standard protocols that provide ever deeper levels of anonymity.
A significant example for this trend is the recent decision of Facebook, which became the first Silicon Valley giant to provide official support for Tor. With the new Tor service, all communication remains in the anonymous Tor network. Previously, some traffic would leave the closed network and access the open internet, potentially exposing a user’s location and other information.
The new set-up means all data is encrypted and Tor users are not mistaken for hacked accounts. Users could access the site “without losing the cryptographic protections” of Tor, Facebook said. One of the reasons for the recent move is that Tor may appeal to people in places where both Facebook and Tor are blocked, like China, Iran, North Korea and Cuba. China in particular has attempted to implement measures to disrupt the network and the creators of Tor have been engaged in a cat-and-mouse game with governments to keep the service accessible.
Facebook’s move would also prove popular among those who want to stop their location and browsing habits from being tracked. They would still need to log in, using real-name credentials, to access the site, but Tor would prevent Facebook from finding out the user’s location and browsing habits.
Shopping on the Darknet market
A glimpse of what might become the future internet was recently offered by a Zurich- and London-based art collective called !Mediengruppe Bitnik, which created an automated internet bot that crawls through the offerings of Darknet marketplaces.
Armed with US$100 in bitcoins (which are accepted on many licit and illicit marketplaces), the “Random Darknet Shopper” buys one item at random each week on the Darknet’s markets. The illicit goods are then shipped to the Kunsthalle St. Gallen in Switzerland, where they are exhibited in display cases as cumulative additions to the museum’s exhibition, “The Darknet – From Memes[2] to Onionland.”
So far, the bot has purchased a “stash can” of Sprite that doubles as a hiding place for drugs or money, a platinum Visa card for $35, 10 ecstasy pills from Germany for US$48, 10 packets of Chesterfield cigarettes from Moldova, and other items such as jeans, “designer” bags and books. It has also bought: a pair of Nike Air Yeezy IIs, sold by a vendor unimaginatively named “Fake”; a baseball cap with a hidden camera in its brim; vacuum-sealed ecstasy pills (several times); a “decoy first-class letter”, which the vendor, “DoctorNick”, suggests using to “test a new drop address” or “see if your roommates/parents are scrutinizing or opening your mail”; and a complete Lord of the Rings e-book collection. One of the most intriguing pieces for the exhibitors has been a fireman’s set of skeleton keys from the United Kingdom, advertised on the Darknet as useful for unlocking toolboxes or “gaining access to communal gates and storage areas”.
The motivation for the artwork came in the wake of the Snowden revelations, which sparked internet artists’ interest in looking at anonymous and encrypted networks from an artistic point of view. However, the project’s authors also called in the services of a lawyer to shore up their legal position should the bot turn up anything that puts them outside the law.
The artists also gained notoriety by sending a parcel to the fugitive whistleblower Julian Assange; the parcel was equipped with a camera that recorded its journey through the postal service to the Ecuadorian Embassy in London.
4. Bitcoin: towards a future universal currency
The British-Swiss “Darknet shopping” art project is but one example of the development of Bitcoin, which is seen as becoming a new kind of global payment network. Like MasterCard or Paypal, it allows money to be transmitted electronically. But Bitcoin is different from these conventional payment networks in two important ways. First, the Bitcoin network is fully decentralized. The MasterCard network is owned and operated by MasterCard Inc. But there is no Bitcoin Inc. Instead, thousands of computers around the world process Bitcoin transactions in a peer-to-peer fashion. Second, MasterCard and PayPal payments are based on conventional currencies such as the US dollar. In contrast, the Bitcoin network has its own unit of value, which is called the bitcoin. The value of one bitcoin fluctuates against other currencies in the same way the euro’s value fluctuates against the dollar. In October 2014, one bitcoin was worth around $400, and all bitcoins in existence were worth around $5 billion.
Bitcoin was created by someone calling himself Satoshi Nakamoto. But no one knows for sure who that is. He introduced the ideas behind Bitcoin in a 2008 paper and launched the Bitcoin network in 2009. The technology began to gain mainstream attention in 2011, when the value of one bitcoin reached parity with the dollar. That same year, Nakamoto stopped actively participating in the Bitcoin community. He turned authority over the Bitcoin software to another developer, Gavin Andresen, who has been the lead Bitcoin developer ever since.
There has been intense interest in Nakamoto’s identity, with a number of people attempting to unmask him. In 2013, a blogger made the case that Nakamoto was the computer scientist Nick Szabo. In 2014, Newsweek claimed that Bitcoin was created by a 64-year-old in California who once went by the name Satoshi but now goes by Dorian Nakamoto. So far, however, no definitive evidence about Bitcoin’s creator has emerged.
The Bitcoin network is based on the consensus of everyone who participates in it: its design gives everyone on the network an incentive to follow the rules established by Bitcoin’s founder, Satoshi Nakamoto.
The standard Bitcoin software is an open source project, currently managed by Gavin Andresen, who was nominated for the role by Nakamoto. But not everyone on the Bitcoin network uses this software. Others have developed independent Bitcoin implementations, and people are free to modify the official Bitcoin client for their own purposes. Andresen is an employee of the Bitcoin Foundation, a non-profit organization that is the closest thing Bitcoin has to a public face. But while the foundation often represents the views of the Bitcoin community, it doesn’t have any formal authority over the Bitcoin network.
Bitcoin is both a currency and a payment network, and this fact has caused a lot of confusion. Some of Bitcoin’s most enthusiastic advocates focus on Bitcoin’s potential as a new currency; they see it as a direct challenge to the dollar and the inflationary ways of the Federal Reserve.
The Blockchain technology
The main attraction of Bitcoin is its innovative blockchain, a decentralized ledger that records every verified transaction. Proponents say it is the most secure record-keeping technology ever devised: each piece of information is stored on an immutable, time-stamped list, which is then replicated on other servers across the globe. This chain lives on hundreds of machines around the world, which helps protect it from corruption, technological or otherwise.
The term “blockchain” comes from the way information is stored: new transactions are stored on a “block” of data, and each block uses code to refer back to the preceding chunk of information, thereby creating a chain. The blockchain is not only a global ledger that can confirm transactions in about 10 minutes, but it is also transparent and essentially unchangeable.
Every Bitcoin transaction that has ever occurred is listed in the blockchain, and every node (i.e. computer) in the Bitcoin network has its own copy. The blockchain is organized as a list of blocks, each of which contains transactions that occurred during a particular period of time.
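The linking principle can be sketched in a few lines (illustrative Python; real Bitcoin blocks carry far more structure, such as Merkle trees and difficulty targets): each block commits to the hash of the previous one, so altering any old entry breaks every later link.

```python
# Minimal hash-linked chain: each block commits to the previous block's hash,
# so tampering with any old transaction invalidates every later link.
# Real Bitcoin blocks carry far more structure; this shows only the principle.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"prev": "0" * 64, "transactions": ["genesis"]}]

def add_block(transactions):
    chain.append({"prev": block_hash(chain[-1]), "transactions": transactions})

add_block(["alice pays bob 1 BTC"])
add_block(["bob pays carol 0.4 BTC"])

def chain_is_valid(chain) -> bool:
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(chain_is_valid(chain))                          # True
chain[1]["transactions"] = ["alice pays eve 99 BTC"]  # rewrite history
print(chain_is_valid(chain))                          # False: the links break
```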
When a Bitcoin user wants to make a transaction, the sender, recipient, amount and other information are announced to the network. Nodes share these announcements in peer-to-peer fashion so that everyone soon knows about them. Nodes examine each transaction to make sure it complies with the rules of the Bitcoin network and to ensure no one can spend money they do not have. Nodes then combine the valid transactions they have heard about into a block which, after verification by the other nodes, becomes an official part of the blockchain.
The Bitcoin network’s process for maintaining the blockchain, the shared record of all Bitcoin transactions, accomplishes something that no other payment network has accomplished before: guaranteeing the integrity of the system without a central authority. Before Bitcoin, the only known way to ensure that the owner of a digital coin didn’t defraud the system by spending it twice was for a central authority to approve a transaction before it was considered final. Bitcoin is the first electronic system to solve the double-spending problem in a completely decentralized fashion. If someone tries to spend the same bitcoin twice, the network refuses to add the second transaction to the blockchain.
The probability of solving a block is proportional to computing power, so the Bitcoin network essentially operates on a principle of one computing cycle, one vote. A malicious party who wanted to tamper with the blockchain would need to obtain a majority of the network’s computing power; only then could it introduce a bogus block and then win the next few computational races to ensure that its block is eventually recognized by the rest of the network. And as the network has grown, that has become very, very difficult to do. Right now, nodes in the Bitcoin network are performing more than 40 thousand million million mathematical operations per second; anyone who wanted to tamper with the blockchain in order to spend their bitcoins twice would have to acquire more computing power than that.
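The computational race itself can be sketched as follows (toy Python with an artificially low difficulty): finding a nonce whose hash meets the target takes many attempts, while checking someone else’s answer takes one, which is why rewriting history requires out-computing the honest majority.

```python
# Sketch of the proof-of-work race: finding a nonce whose hash meets the
# difficulty target takes many attempts, but verifying an answer takes one.
# The difficulty here is tiny; Bitcoin's real target is astronomically harder.
import hashlib

def solve(block_data: str, difficulty: int = 4) -> int:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce                      # proof found after many attempts
        nonce += 1

def check(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

data = "prev=abc123 txs=[alice pays bob 1 BTC]"
nonce = solve(data)                           # slow: the 'mining' step
print(nonce, check(data, nonce))              # fast: anyone can verify
```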
For Bitcoin experts, the blockchain technology may be used for a completely decentralized financial exchange. One such effort is already under development under the name “Medici”, evoking the advent of modern banking in the Renaissance. “Medici” offers a clear value proposition: an equities or bonds exchange that cuts out the middlemen could offer greater efficiencies and cost savings and allow the layman to participate more readily in the market.
Though major industry moves have yet to be announced, most in the space are confident that the blockchain is already attracting significant corporate interest, including from the likes of IBM, which has publicly announced it is exploring blockchain as a way to connect an “Internet of things”. The Bank of England and the Union Bank of Switzerland (UBS) have also commented recently on the potential of the technology.
On the way to becoming accepted
Microsoft’s December 2014 announcement that it accepts Bitcoin as a payment option for digital content came after it added a Bitcoin currency converter to its Bing search engine. Bitcoin can now be used to fund a Microsoft Account, allowing owners to purchase content from the Windows and Xbox games, music, and video stores.
Specialists saw it as a surprise move from Microsoft, which thus became one of the first big tech companies to fully support Bitcoin, beating its rivals Apple and Google. Many Bitcoin enthusiasts were impressed with Microsoft’s support of the virtual currency, which some described as “pretty huge”. The news sent the Bitcoin price up $20 in less than two hours.
Also in December 2014, the press giant Time Inc. partnered with Coinbase, one of the most popular bitcoin wallets, to become the first major magazine publisher to accept bitcoin payments. The Time Inc. publications Fortune, Health, This Old House and Travel + Leisure announced that they are accepting bitcoin among the wide variety of payment options for subscriptions.
Time Inc. is one of the world’s leading media companies reaching more than 130 million consumers each month across multiple platforms. As of September 30, 2014, it operated approximately 40 websites that collectively have over 100 million average monthly unique visitors around the world.
Time Inc. joins a growing list of businesses, such as Overstock.com, Google and Expedia, that partnered with Coinbase to integrate bitcoin into their customer payment options.
Other examples in 2014 showed a slow but visible trend toward wider acceptance of bitcoin as a means of payment.
– Great Britain abolished VAT on bitcoin trades in March 2014, although it avoided declaring bitcoin a currency. Nonetheless, the UK now treats bitcoin almost identically to normal currencies, and buying bitcoins is no longer subject to VAT.
– In July 2014, the Latvian national airline AirBaltic became the world’s first airline to accept bitcoins as payment for its services, on its website www.airbaltic.com. The ticket prices on the website are displayed in euros, but when AirBaltic customers pay for their flight, the bitcoins are converted to euros at the current exchange rate. To make this possible, AirBaltic teamed up with Bitpay, a third-party payment processor that converts bitcoins into euros.
– In December 2014, the State of New York ruled that sales tax was not due on purchases of bitcoin, although it avoided explicitly declaring bitcoin a currency.
Bitcoin also shows promise as the world’s first completely open payment network. International money transfers seem like one example. Conventional money-transfer services like Moneygram and Western Union charge high fees while offering rather slow service. These companies are highly profitable and highly resistant to change because it’s extremely expensive to maintain a network of storefronts around the world. Bitcoin could dramatically reduce the barrier to entry for competing with conventional money transfer services. Entrepreneurs in different countries could help customers convert between their local currency and Bitcoin, and then use the Bitcoin network to actually send the money. Small firms would be able to send money anywhere in the world, without worrying about how the recipient would convert Bitcoins back to the local currency. Low barriers to entry could mean more competition, lower prices, and higher quality service.
A more ambitious use for Bitcoin would be to compete with conventional credit card networks. Already, a growing number of merchants have begun accepting Bitcoin payments, although a lot more work will be needed to build user-friendly services that allow consumers to make Bitcoin payments. But once again, the low barriers to entry for Bitcoin-based services mean that lots of people will be trying to figure out how to make Bitcoin payments more accessible to consumers.
Most exciting is the possibility of using Bitcoin to create new types of financial services that do not exist today.
The future of Bitcoin
Bitcoin has advanced with great speed, passing through the various growing-up stages along the way: first came miners (creation), then exchanges (buy/sell), next digital wallets (storage) and vaults (more storage). The current phase is the move to a medium of exchange (i.e. payment service provision for merchants and setting up for transactional usage).
The future of Bitcoin is closely linked to the applications of the blockchain: if the blockchain can securely hold the record of Bitcoin transactions, then it should be able to hold any other information with the same guarantees.
One such application would be the so-called “smart loans” that could automatically adjust interest rates based on the financial performance of a borrower. The contract’s code would need to include automated observation of key real-world metrics, such as the rate at which the borrower is paying off the loan. While most commercial loans already have these provisions, they have to be manually reported and monitored, and enforcement may fall to the discretion of individual agents, or the courts, so applying this technology would create major efficiencies. The blockchain technology is even seen as theoretically able to remove the potential for parties to have a dispute.
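A minimal sketch of the “smart loan” idea is given below, written in ordinary Python rather than an actual blockchain contract language; the repayment metric and the thresholds are invented purely for illustration.

```python
def adjust_interest_rate(base_rate, repayment_ratio):
    """Illustrative 'smart loan' rule: the contract observes how much of the
    scheduled repayment was actually made and adjusts the rate automatically,
    with no manual reporting or enforcement step."""
    if repayment_ratio >= 1.0:      # paying on or ahead of schedule
        return base_rate - 0.25
    if repayment_ratio >= 0.9:      # slightly behind
        return base_rate
    return base_rate + 1.0          # materially behind: rate rises automatically

print(adjust_interest_rate(5.0, repayment_ratio=0.95))  # 5.0
print(adjust_interest_rate(5.0, repayment_ratio=0.6))   # 6.0
```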
A similar application would involve “smart property”, such as a digital asset (or one day even a real-world product like a car) that would turn off if the smart contract is ever breached. The car example, which was cited by several technologists, would likely require something akin to a self-driving feature to return it to a dealership. The blockchain innovation allows the system to work without a trusted third party, theoretically cutting costs.
While dollar volumes at giants like Amazon and eBay are adequately served by existing systems, a major opportunity exists to create the next compelling consumer payment method for the internet era – which would be the first major innovation in the space since the launch of PayPal in 1999. Bitcoin is the obvious contender. However, from a user’s standpoint it is still difficult, requiring a log-in process similar to accessing an online bank account. The best way forward is to build apps with seamless one-click interfaces, making the consumer experience as straightforward as possible. This is starting to happen, with consumer services mimicking traditional banking apps.
Several developments have the potential to drive Bitcoin higher in 2015: a better usage model, more companies using blockchain-based technology and increased usage for remittances.
While, for the moment, Bitcoin still needs to find its defining use, and numerous players are working in 2015 on a Bitcoin usage model that does something better than before, increased usage of blockchain technology is expected to boost Bitcoin’s appeal. Recently, a company offering a platform and programming language that leverages blockchain technology for contracts and financial transactions pulled in around $15 million in a crowdfunding campaign. Google and IBM are also reportedly looking to invest in blockchain applications. Remittances constitute another interesting case for Bitcoin, which offers the possibility of sending money nearly instantly anywhere in the world at very low cost.
Analysts consider wider adoption of Bitcoin highly likely; late November 2014 saw the highest number of Bitcoin transactions ever recorded in a single day, surpassing the previous peak set during the 2013 Bitcoin bubble.
5. The social media revolution
In recent years the evolution of the Internet has turned into a revolution: the modern Internet is often called “Web 2.0”, with social media and social web communities as its main components.
Today’s social media have helped make real the idea of the “global village”, first introduced by Marshall McLuhan in the 1960s, and increasingly suggest the existence of a “flat world”, where personal computers and the speed of fibre-optic cables in transferring information have almost removed the limitations of time and space.
Social media’s quick development into an important way to influence society is part of the advancement of information and communication technologies.
The quick development of mobile technology and different mobile terminals has been important for the creation and use of social media. A modern, well-equipped Smartphone can be a pocket-sized mega-studio.
The applications and services of information and communications technology are merging together more and more. The different hardware and services we use now contain a new kind of “intelligence”, where these machines and services communicate with each other without any particular action by the user.
A new set of communication “tools”
Most of us know social media from its different tools and communities. Facebook, MySpace, YouTube, Flickr and Wikipedia are the most famous. The tools of social media – we can also call them “Web 2.0” tools – developed quickly, and new tools, functions, and services are born every day.
Social media is part of the whole body of activity consisting of Internet communications and online interaction. When operations are based on quickly-changing content, linking and sharing, a working “online headquarters” becomes necessary, which requires the creation and maintenance of good and interesting websites. Visitors to different websites should be able to actively follow what new content has been published on the website without actually using the site. Sharing operations like AddThis and ShareThis or RSS feeds offer these possibilities. Sharing options often appear as buttons on websites, making it very easy for users to forward the site’s content. Many websites also carry a Facebook “Like” button which, when clicked, then recommends the site to the clicker’s own friends. On the other hand, RSS feeds keep users informed of site changes, but do not share this information with others.
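As an illustration of how an RSS reader follows a site without sharing anything back, here is a short Python sketch using only the standard library; the feed address is a placeholder, not a real feed.

```python
import urllib.request
import xml.etree.ElementTree as ET

def latest_headlines(feed_url, limit=5):
    """Fetch an RSS 2.0 feed and return its most recent item titles and links."""
    with urllib.request.urlopen(feed_url) as response:
        root = ET.fromstring(response.read())
    items = root.findall("./channel/item")[:limit]
    return [(item.findtext("title"), item.findtext("link")) for item in items]

# Placeholder feed address, used here only for illustration.
for title, link in latest_headlines("https://example.com/feed.xml"):
    print(title, "->", link)
```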
Blogs have been published since the mid-1990s, when they mostly resembled online personal diaries; they were basically “web log books”, from which the word is derived. The main difference from a real diary is that this online version can receive comments, links, and other feedback from readers. What makes blogging an effective information network is the inter-user “blogosphere” that shares links between blogs referring to similar content. Blogs can be tagged using different search terms, they can be listed in blog directories according to name, and each blog entry is another hit on search engines.
Twitter is a free, Internet-based micro-blogging service, on which users can send short, 140-character messages to each other. Its use is based on quick exchanges of thoughts and information between friends, acquaintances, and all the users of the Twitter platform. Twitter messages are most commonly called “tweets”. These tweets form a current of messages that are followed in chronological order from a computer, tablet or Smartphone screen. A sort of keyword called a “hashtag” can be added to tweets to connect the current message to some other message, making it easier to follow the messages. Twitter can also be used to steer the user to more detailed content elsewhere, through web links or other references.
Wikis and similar text-based works of collaboration are web pages that can be modified by anyone who has the right to do so. Wikipedia is the most famous example of all wikis and “wiki-like” works. The basic idea behind wikis is to provide voluntary, decentralized and open information. Text can be added or corrected, and new sections can be added without the need to modify the structure of the entire page. Those who add new information are also the ones checking it. Having many individuals participate in a common task and the chance to take advantage of group intelligence are the greatest strengths of wikis.
YouTube is the Internet’s leading video service. It began operating in 2005, and grew very quickly, with 50 million visits to the site by the end of the same year. In 2010, there were already more than 2 billion visits to YouTube every day. The basic idea behind YouTube is that users upload videos to the site and at the same time, watch and comment on what they see.
Facebook is the Internet’s leading online community. Most consider Facebook the very image of social media. Its basic idea is to offer each registered user the chance to create a profile with pictures and to keep in touch with their so-called “friends”, or contacts they link to on the site. Facebook wasn’t the first of its kind: similar services already existed in the late 1990s, but the way Harvard University student Mark Zuckerberg linked a person’s photograph and profile to others and created a way to share thoughts, pictures and links was completely new. It was easy for users to adapt to it. Facebook was first available in February 2004 to Harvard students. Within one year, Facebook was used in almost all American schools, and was opened for public use in 2006. The worldwide fascination with Facebook is based on the possibility it offers to be in contact with people whose e-mail addresses and phone numbers have changed or become outdated. But an even more important feature of Facebook is the chance to create networks: Facebook’s activity is based solely on communities. Being on Facebook isn’t just limited to sharing information within a group of friends. Through groups, users can form new networks. A user’s posting, in the form of text, pictures or both, can receive feedback from other users in the form of the “Like” button, and the option to make their own comments. They can also forward the posting to their own Facebook contacts using the “Share” option. One popular feature Facebook supports is community pages for common interests.
Changing the way people communicate
Social media are defined as new information networks and information technologies that make use of a form of communication utilizing interactive and user-produced content, through which interpersonal relationships are created and maintained. Typical social media services include content sharing, web communities, and Internet forums. Their common features are:
- Social networking and social interaction
- Participation
- The use of different providers (e.g. search engines, blog spaces, etc.)
- Openness
- Collaboration (between both users and user groups)
Social media’s greatest change to the way people communicate is the user-produced content and the fast and flexible sharing of this content.
One of the main features of social media is the anonymity of its participants, which means that those who write and comment often use nicknames or aliases. Even though anonymity provides an opportunity to comment on delicate issues, it can also sometimes lead to “flame wars” and the avoidance of responsibility. Use of the writer’s real name makes the message stand out (for example, in the “Letters to the Editor” pages of newspapers), since the author wants to be identified as owning that comment. Those who want their communication to carry weight need to be able to appear under their own names.
Another feature is the richness and diversity of information social media provide. Users are no longer dependent on a single source for their news and other data, but can flexibly use several different media at the same time. Social media have also made it possible to combine different kinds of recorded information in very flexible ways: not just text, pictures, audio, video, and animation, but all of these combined. With today’s compact video cameras, sound recorders, laptop computers and other mobile devices, combined with affordable software, one can easily create and edit impressive presentations.
Social media are also omnipresent, with no isolated corners or hiding places. The private and public lives of society’s most influential figures have merged and become public space. Many politicians have had to face the fact that a phrase taken out of context or a joke they told during a private conversation has been recorded by outsiders and quickly made public on the Internet.
Another main feature is speed. News and information spread more quickly than ever, but the demand for speed can also lead to reports going out without confirmation. We are in contact too often and too quickly, and in our haste we cannot process new information adequately.
The lack of a clear hierarchy is another characteristic of social media, due to the multitude of roles that users assume and their relation to each other. An example is the online encyclopedia Wikipedia, which does not have a main editor but an army of tens of thousands of writers, inspectors, and editors. So, if inaccuracies are found, to whom at Wikipedia should complaints be directed? The only remedy is to edit the article in question oneself and correct the perceived mistakes.
Due to the near absence of traditional methods of regulation, a government can attempt to restrict the content of social media, but traditional censorship cannot keep up with ever-changing web pages. China and Saudi Arabia, for example, tightly control their citizens’ use of the Internet and social media. On the other hand, it is technically difficult to interfere with even the most radical web-distributed propaganda.
These features lead to another: the move from objectivity to subjectivity. An often quoted example, in the United States, was the rumor, found across different social media platforms, that President Barack Obama is a Muslim. Over 20% of Americans still believe that Obama is a Muslim, even though this false information has been repeatedly refuted.
Enter the Smartphone
One of the most significant events of the technological development process is the convergence of communication and computing in mobile consumer devices, which brings about the interoperability of services and functions from every industry, with the so-called Smartphone playing the role of a universal mobile terminal. The term refers to the new class of mobile phones that provide integrated services of communication and computing. A Smartphone is a mobile phone with advanced features and functionality: displaying photos, playing games and videos, navigation, a built-in camera, audio/video playback and recording, sending and receiving e-mail, built-in apps for social websites, web browsing, wireless Internet and much more. For these reasons, the Smartphone has become a common choice not only for business users, as it was initially intended, but also for average consumers. Due to its ubiquitous nature and social acceptance, one can find Smartphones in educational institutions, hospitals, public places, shopping malls, and so on.
The story of today’s Smartphone begins around 1993, when an “early Smartphone” was used as an enterprise device, being too expensive for average consumers. The “Simon” Personal Communicator, a handheld, touchscreen cellular phone and PDA (Personal Digital Assistant), designed and engineered by IBM and assembled under contract by Mitsubishi Electric Corp., was the first cellular phone to include telephone and PDA features in one device.
In 1999, a new device, the “BlackBerry”, which was introduced as a two-way pager in Munich, Germany, became the foundation for the BlackBerry Smartphone that was launched in 2003, supporting email, mobile telephone, text messaging, Internet faxing, Web browsing and other wireless information services.
A new phase for the Smartphone started with the advent of the Apple iPhone in 2007, considered a major breakthrough since the new Smartphone was also aimed at the consumer market. At the end of that year, Google launched the Android Operating System, intending to approach the consumer Smartphone market by introducing the features consumers required while keeping costs low in order to attract more customers. Features like email, social website integration, audio/video capabilities, internet access and chat, along with the general features of a phone, were part of the Android-operated Smartphone.
One of the main features of the Android mobile operating system (OS) is a user interface based on direct manipulation, using touch inputs that loosely correspond to real-world actions, like swiping, tapping, pinching, and reverse pinching to manipulate on-screen objects, and a virtual keyboard. Android is currently the most widely used mobile OS and, as of 2013, the highest-selling OS overall. As of July 2013, the Google Play store had over 1 million Android apps published and over 50 billion apps downloaded. In 2014, Google revealed that there were over 1 billion active monthly Android users, up from 538 million in June 2013. This is also due to the fact that Android’s source code is released by Google under open source licenses, which makes Android popular with technology companies that require a ready-made, low-cost and customizable operating system. Android’s open nature has encouraged a large community of developers and enthusiasts to use the open-source code as a foundation for community-driven projects, which add new features for advanced users or bring Android to devices that were previously released running other operating systems.
The effect of smartphones on society
The simple, app-based environment on modern Smartphones has made using internet services on a phone less intimidating for the average user. A very important factor is that apps are simple for most people to install and understand.
Internet-based communication is on the rise with apps like WhatsApp, and even voice and video calling is more commonplace, as pretty much all Smartphones are capable of handling them quite easily. Sharing and consuming media content is also easier in this simple app-based environment.
More people are getting into app development as they find it easier to develop simple mobile apps. A specific side effect of Android’s openness is that companies like Samsung started adding sensors such as barometers, humidity sensors and ambient temperature sensors to their phones, which allows apps like PressureNET to crowd-source atmospheric data to improve the understanding of weather systems in cities and possibly improve forecasting techniques.
In the most recent developments, one can see the effort of the main producers to close the gap between the corporate and the general consumer Smartphone, to improve display quality and display technology, and to increase the stability of the systems by introducing more powerful batteries.
Narcissism without privacy
Social media have changed the virtual and physical landscape in many ways. They have given consumers a collective ‘voice’ in the marketplace and connected people around the globe in remarkable ways. However, there is also considerable evidence of social media negatively affecting people’s lives.
a) Digital exhibitionism and inappropriate self-disclosure have been at the core of every successful app and website. It all began with MySpace, a directory for wannabe pop stars and DJs. Then came Facebook, the encyclopedia of common people. YouTube gave everybody their own TV channel, while Blogger and Tumblr made us all creative writers. Twitter brought in tons of followers and LinkedIn positive endorsements – because who cares about our faults? Instagram made “selfie” the word of the year, while Tinder – the ultimate dating tool for narcissists – and Snapchat – the bastion of ephemeral sexting – make Facebook look intellectual. And if your concern is to remain connected after death, there is a whole movement, the digital afterlife industry, dedicated to the preservation of your narcissistic social media activity after you die.
Increases in narcissism levels pre-date social media, but they have been exacerbated since its emergence. At the same time, there has been a steep decline in altruism and empathy levels since the advent of Facebook and Twitter. People are more connected than ever, but also less interested in other people, as if being closer to others made us more antisocial. Psychoanalyst Sigmund Freud referred to this as the “hedgehog dilemma”: humans are like hedgehogs in the winter – they need to get close to each other to cope with the cold, but they cannot get too close without hurting each other with their spines.
Scientific studies have shown that the number of status updates, selfies, check-ins, followers and friends is correlated with narcissism, as is the tendency to accept invitations from strangers, particularly when they are attractive. The reason for these correlations is that narcissistic individuals are much more likely to use social media to portray a desirable, albeit unrealistic, self-image, accumulate virtual friends and broadcast their life to an audience. However, the desire to be accepted morphs into a relentless quest for status, which undermines other people and impairs our ability to build and maintain happy relationships and successful careers.
b) Another negative aspect comes from the fact that content distributed on the Internet lives forever, and social media have dramatically increased the amount of personal content being created and shared, sometimes virally. Once something personal is posted on a social media site, one can no longer control who sees that picture, post, editorial comment or crazy video. It belongs to the free world, even if the originator instantly tries to recall, delete or otherwise eliminate it. For example, the US Library of Congress archived every ‘tweet’ ever ‘tweeted’ on Twitter, all 170 billion of them, and it is providing access to that information to other interested parties.
Also, there are companies that have created technology specifically designed to capture information supposedly ‘deleted’ from the internet. One company, Undetweetable, made it possible to view deleted tweets from any Twitter user simply by entering their user name. This site was soon shut down, but another one, which does the same thing for politicians, is still active. Such sites add to the multiple search engines sharing the same content, cached copies and downloaded content, making it practically impossible to truly wipe something out.
Such content may have a negative impact on the persons who posted it. A recent survey of 6,000 job applicants (16- to 34-year-olds across six countries) revealed that more than 10 percent had been turned down for a job because of pictures or comments in their online/social media presence.
c) Perhaps one of the most dangerous features is the amount of private and personal data stored with social media applications, and the fact that users not only cannot take it back but have no control over how it is used.
Apple’s new watch keeps track of one’s health. Google Now gathers the information needed to compute the ideal time to leave for the airport. Amazon tells the customer which books, groceries, films he/she wants, and sells the tablet necessary to order them and more. The lights turn on when one gets close to home, and the house adjusts to the owner’s choice of ambient temperature.
This may sound great, but every time we add a new device, we give away a little piece of ourselves. We often do this with very little knowledge about who is getting it. Beyond the marketing, the people running these organizations are faceless and nameless. We know little about them, but they sure know a lot about us.
Previously, the vast array of details that defined a person was widely distributed. The bank knew a bit, the doctor knew a bit, the tax authority knew a bit, but they did not all talk to one another. Now Apple and Google know it all and store it in one handy place. That is great for convenience, but not so great if they decide to use that information in ways with which we might not agree.
And we have reason to call into question companies’ judgment in using that data. The backlash to the news that Facebook used people’s news feeds to test whether what they viewed could alter their moods was proof of that. Recently, hackers misappropriated photos sent via Snapchat, a service used primarily by young people that promises auto-deletion of all files upon viewing.
We like new applications and try them out, handing over access to our Facebook or Twitter accounts without much thought about the migration of our personal data from big companies with some modicum of oversight to small companies without rigorous structures and limits.
As we rightly move toward universal Internet access, we need to ask: How much of ourselves are we willing to give away? What happens when sharing becomes mandatory – when giving access to a personal Facebook account is a job requirement, and health services are withheld unless a patient submits their historical Fitbit data?
6. Social media and politics
Since the early 1990s, social media have become a fact of life for civil society worldwide, involving many actors – regular citizens, activists, nongovernmental organizations, telecommunications firms, software providers, governments. As the communications landscape gets denser, more complex, and more participatory, the networked population is gaining greater access to information, more opportunities to engage in public speech, and an enhanced ability to undertake collective action.
Social media have become coordinating tools for nearly all of the world’s political movements, just as most of the world’s authoritarian governments (and, alarmingly, an increasing number of democratic ones) are trying to limit access to them.
Just as Luther adopted the new printing press to protest against the Catholic Church, and the American revolutionaries synchronized their beliefs using the postal service that Benjamin Franklin had designed, today’s dissident movements will use any means possible to frame their views and coordinate their actions. In the same way, parties and politicians make extensive use of the new technologies as vote-winning weapons.
History and case studies
Barack Obama’s 2008 US presidential campaign has often been described as the first electoral campaign in which the use of social media had a decisive impact. The core of the web-based campaign was a well-designed, versatile and dynamic website, “my.barackobama.com”.
But even before that, in 2007, the French centre-right UMP party’s Nicolas Sarkozy scored a decisive victory over the opposing socialist candidate Ségolène Royal for the French presidency. Social media had a strong influence on the outcome of the election: over 40% of Internet users reported that conversations and other activities on the Internet had an effect on their voting decisions.
The first time social media helped force out a national political leader was on January 17, 2001, during the impeachment trial of Philippine President Joseph Estrada. When loyalists in the Philippine Congress voted to set aside key evidence against him, thousands of Filipinos converged on a major crossroads in Manila in less than two hours. The protest was arranged, in part, by forwarded text messages reading, “Go 2 EDSA. Wear blk”. The crowd quickly swelled, and in the next few days over a million people arrived, choking traffic in downtown Manila. The public’s ability to coordinate such a massive and rapid response (close to seven million text messages were sent that week) so alarmed the country’s legislators that they reversed course and allowed the evidence to be presented. Estrada blamed “the text-messaging generation” for his downfall.
The Philippine strategy has been adopted many times since. In some cases, the protesters ultimately succeeded, as in Spain in 2004, when demonstrations organized by text messaging led to the quick ouster of Spanish Prime Minister José María Aznar, who had inaccurately blamed the Madrid transit bombings on Basque separatists. The Communist Party lost power in Moldova in 2009 when massive protests coordinated in part by text message, Facebook, and Twitter broke out after an obviously fraudulent election.
Around the world, the Catholic Church has faced lawsuits over its harboring of child rapists, a process that started when The Boston Globe’s 2002 exposé of sexual abuse in the church went viral online in a matter of hours.
In other cases, governments tried to block or hinder communications, but social media have disrupted these restrictive practices. A typical example is the Egyptian Revolution, part of the “Arab Spring” of 2011. In Tahrir Square in Cairo, hundreds of people sent continuous information and updates as text, pictures and video all over the world through the Internet. In this case, experts agree that the average individual has risen to the centre of digital content production, sharing his own knowledge, wisdom and personal experiences with his peers.
According to Alec Ross, Hillary Clinton’s senior adviser for innovation at the US state department in 2011, the internet became “the Che Guevara of the 21st century” in the Arab Spring uprisings. In his vision, the social media networks disrupt the exercise of power and devolve power from the nation state – from governments and large institutions – to individuals and small institutions. Alec Ross, who also helped co-ordinate Barack Obama’s 2008 election campaign, also stressed the “lightning fast change” triggered by the internet communication.
Case study: 2010 elections in Great Britain
In Great Britain’s 2010 elections, Conservative candidate David Cameron made successful use of a “new media” team that was permanently at work, alerting tens of thousands of followers through instant updates on Facebook and Twitter and giving them internet links to the full speeches so they could download them on their laptops, BlackBerrys or mobile phones. Political bloggers were briefed before they poured their instant analysis onto the web. Film was prepared for YouTube. The key messages were sent hours before the day’s main TV news bulletins.
It was totally different from the previous election, in May 2005, when social networking sites were known to few. Facebook was largely unheard of and Twitter had yet to be invented. YouTube had been in existence for only three months. Blogs were in their infancy and political bloggers, now hugely influential in the flow of news, had yet to evolve. All parties used email, but beyond that the internet remained undeveloped as a campaigning tool.
Following the example set by Barack Obama’s team in 2008, the British election in 2010 showed how much the world had changed and how susceptible election outcomes are to the unpredictability of events online. The ways the main parties used technology to get their messages across to the widest possible audiences were totally new and different. The main UK parties devoted almost as much attention to turning the internet’s power, reach and speed in their favor as they did to actual policy.
According to Blue State Digital, the online campaign consultancy that provided the technology powering the Obama 2008 and UK 2010 campaigns, new media campaigns matter especially when every single vote counts, since the party that masters new media may have the decisive edge.
All the main parties in 2010 tried to imitate the 2008 Obama campaign by mobilizing vast new armies of supporters via the internet and social networking sites. The main aim was to get supporters to talk to their friends about their political enthusiasms, applying the US lesson that people are more likely to be influenced by those they are acquainted with than by politicians, newspapers or experts.
Ayatollah’s Facebook and Twitter presence
At the end of 2012, Iran’s supreme leader, Ayatollah Ali Khamenei, launched his own Facebook page, despite the social networking site being officially banned under the Islamic Republic’s restrictive internet censorship policy.
The page, Khamenei.ir, which has notched up nearly 10,000 “likes” since its creation on December 13, 2012, followed the establishment of a highly-active website and a personal Twitter account. The page has been publicized by a Twitter account of the same name that Iran experts believe is run by Khamenei’s office.
Khamenei’s Facebook page shares a similar tone, style and content with accounts devoted to disseminating Khamenei’s message on Twitter and Instagram and to the website www.khamenei.ir, an official website published in 13 languages.
Experts said the social media accounts showed that Iran, despite restricting access to such sites inside the country, was keen to use them to spread its world view to a global audience. Social media give the regime leadership another medium of communication that can be used to share their message with a younger and far more international demographic.
In November 2014, in the context of U.S. President Barack Obama’s conciliatory letter to Ayatollah Khamenei, and reportedly “inconclusive” direct US-EU-Iran nuclear talks, Khamenei tweeted and Facebooked to the world that the United States is the “only nuclear #criminal in the world”. He went on to explain: “that is to say the U.S., that has attacked the oppressed people of #Japan with #atomic bombs, is falsely claiming to fight the proliferation of #NuclearWeapons”. Khamenei’s Facebook-post explanation was posted with an imaginative “infographic” which, signed directly in the name of “Ayatollah Khamenei,” compares the “American view of Nuclear energy” – symbolized by a nuclear bomb mushroom cloud – symmetrically matched with the “Islamic view of Nuclear Energy,” which Khamenei symbolizes with a serene tree and a mountain.
Also in November 2014, Khamenei’s official Twitter account mentioned that Iran has presented international communities with “a practical and logical mechanism” aimed at achieving the elimination of the state of Israel. The “proper way” to achieve this would be through a referendum that would encompass “all the original people of Palestine including Muslims, Christians and Jews wherever they are”. The post stipulates that the referendum would instate a government in Palestine, but does not specify how. This government will then decide whether the “Jewish immigrants who have been persuaded into emigration to Palestine” can remain in the country or should return to “their home countries”. While Israel itself is not expected to accept the proposal, Khamenei believes that the “fair and logical plan” can be “properly understood by global public opinion and can enjoy the supports of the independent nations and governments”.
No substitute for face-to-face human interaction
There are two things at which social media are extremely good: information sharing and co-ordination. However, neither of these advances provides a substitute for face-to-face interaction in confronting – and ultimately transforming – the structures of social, political and economic power.
Information sharing has been the primary achievement of social media to date. Nothing stays hidden for long. Uploading self-authored content, whether instant news footage or opinion or competing data, has fundamentally changed the way politics works. Governments can no longer hope to hide inconvenient facts or control their own narrative when there are so many sources of alternative information and analysis available.
Coordination has been the second big success – getting people to act in unison whether to rally together to oppose government oppression, send mass emails, sign digital petitions or fund activism. States have tried to keep pace with these advances – they spy prodigiously online, monitor and infiltrate activist groups, and arrest bloggers. They even suspend or ban online platforms. But surely no-one can say that the global public sphere will ever be the same again.
However, according to Teddy Goff, who directed President Barack Obama’s digital campaigns in 2008 (considered the first social media campaign) and 2012, while technology has radically changed the way campaigns are conducted, the average voter still wants to be listened to and respected.
During the 2012 campaign, Goff managed a 250-person team responsible for social media, email, web, online advertising, design and video. He stressed that digital communications had come a long way since Obama’s 2008 campaign, when the platforms for social media and mobile communications were still in their infancy.
Under Goff’s leadership, the 2012 campaign raised more than $690 million via the Internet, and registered more than one million voters online. Goff’s team also built a Facebook following of more than 45 million people and a Twitter following of more than 33 million.
Also according to Teddy Goff, democracy has undergone a fundamental change in the way voters participate in today’s political campaigns: people now have the ability to reply to a politician and expect to receive feedback.
One of the far-reaching changes brought by the new media is their power to generate mass activism. While blogs and politicians’ tweeting get most of the attention, a new form of organizing supporters is emerging. By energizing people and then giving them the tools to get involved and become advocates for a party, campaigns put thousands of people in conversation with volunteers. The desire to control from the centre has been replaced by the need to reach as many people as possible, even if that involves risks. The new tools of online campaigning mean that more and more people can become closely involved in campaigns.
The internet brings opportunities but also dangers, since it introduces greater unpredictability of outcomes. Moreover, while politicians are able “to talk directly to voters” through podcasts and blogs, bypassing the traditional media, other new forces online complicate the information flow, including the army of new political bloggers who can wreck a party’s best-laid plans. All are aware of the internet’s ability to capture the so-called “gotcha moments” – blunders or controversial statements caught on film and then broadcast to millions, with devastating effect, on outlets such as YouTube.
7. Power to (control and manipulate) the people
Among the fears about the loss of privacy brought into the open as the new technologies become more and more a part of everyday life is their potential for surveillance. More than a decade ago, it was suggested that computing embedded in everyday life is “the dream of spooks and spies – a bug in every object”. Given the recent revelations about NSA monitoring, it is no wonder that analysts are making more and more use of the already familiar adjective “Orwellian”.
Universal control
As part of the 2013 leaks by former NSA contractor Edward Snowden, it was revealed, mainly by the British newspaper The Guardian, that the U.S. National Security Agency (NSA) and the British Government Communications Headquarters (GCHQ) are able to access user data stored on Android, iPhone and BlackBerry devices, including SMS, location, emails and notes.
Alleged NSA internal slides included in the Snowden disclosures purported to show that the NSA could unilaterally access data and perform “extensive, in-depth surveillance on live communications and stored information”, with examples including email, video and voice chat, videos, photos, voice-over-IP chats (such as Skype), file transfers, and social networking details.
Snowden summarized that “in general, the reality is this: if an NSA, FBI, CIA, DIA, etc. analyst has access to query raw SIGINT (signals intelligence) databases, they can enter and get results for anything they want”.
In June 2013, “The Guardian” broke news of the secret collection of telephone records under the Barack Obama administration and subsequently revealed the existence of the PRISM surveillance program.
The newspaper was subsequently contacted by the British government’s Cabinet Secretary, Sir Jeremy Heywood, under instruction from the Prime Minister David Cameron and Deputy Prime Minister Nick Clegg, who ordered that the hard drives containing the information must be destroyed. The Guardian’s offices were then visited in July by agents of GCHQ, who supervised the destruction of the hard drives containing information acquired from Snowden. In June 2014, The Register reported that the information the government sought to suppress by destroying the hard drives related to the location of a “beyond top secret” internet monitoring base in Seeb, Oman and the close involvement of some British telecommunication groups in intercepting internet communications.
The documents also revealed a further effort by the intelligence agencies to intercept Google Maps searches and queries submitted from Android and other Smartphones to collect location information in bulk.
According to The Guardian, the NSA had access to chats and emails on Hotmail.com and Skype because Microsoft had “developed a surveillance capability to deal” with the interception of chats, and because “Prism collection against Microsoft email services will be unaffected because Prism collects this data prior to encryption”.
Further reports in January 2014 revealed the intelligence agencies’ capabilities to intercept the personal information transmitted across the Internet by social networks and other popular apps, which collect personal information of their users for advertising and other commercial reasons.
According to another report recently published by the Wall Street Journal, U.S. law enforcement agencies are using spy planes equipped with military-grade snooping technology to obtain information from millions of Smartphones in the U.S. The Cessna planes are equipped with “dirtboxes” that are used to mimic mobile phone towers, helping track criminals but also recording innocent citizens’ data.
Sources familiar with the program said that it began in 2007 and is currently operating from five metropolitan airports in the U.S. Between them, the planes have a flying range covering the majority of the U.S. population. The technology being used is similar to a known method called stingray. Both stingray devices and “dirtboxes” use off-the-shelf components to collect mobiles’ International Mobile Subscriber Identity (IMSI), an identifying code unique to each device. They can be used to track individuals’ movements via their mobiles but work indiscriminately, hoovering up information from a general area. A number of different U.S. agencies, including the FBI, Drug Enforcement Agency, Secret Service, Army, Navy and Marshals Service, are known to use stingrays, and security experts say that these capabilities have become globalized and democratized.
Manipulation experiments
Other documents retrieved from the so-called Snowden archive revealed how western intelligence agencies are attempting to manipulate and control online discourse using tactics of deception and reputation-destruction. These agencies are attempting to control, infiltrate, manipulate, and warp online discourse, with the risk of compromising the integrity of the internet itself.
JTRIG and HSOC
Such actions were developed within a unit of the British Government Communications Headquarters (GCHQ) called the Joint Threat Research Intelligence Group (JTRIG). Several classified JTRIG documents retrieved from the Snowden archive were shared by GCHQ with the U.S. NSA and the other three partners of the “Five Eyes” alliance (Australia, Canada and New Zealand), among them one titled The Art of Deception: Training for Online Covert Operations. According to the Snowden archive documents, JTRIG’s mission is to use online techniques to make something happen in the real or cyber world.
Among the core purposes of JTRIG are two tactics: (1) to inject false material onto the internet in order to destroy the reputation of its targets; and (2) to use social sciences and other techniques to manipulate online discourse and activism to generate outcomes it considers desirable.
GCHQ developed its own software tools to infiltrate the internet to shape what people see, with the ability to rig online polls, increase page view counts on specific websites, and psychologically manipulate people on social media. The same toolkit enabled the spy agency to censor video content that it judged to be “extremist”. The tools were developed by JTRIG and are listed in a catalogue format, enabling other GCHQ departments to see what tools have been developed or were undergoing development.
Among the tools listed by JTRIG were the following (codenames in capitals):
– ‘Change outcome of online polls’ – UNDERPASS
– ‘Find private photographs of targets on Facebook’ – SPRING BISHOP
– ‘Active Skype capability. Provision of real-time call records (SkypeOut and SkypetoSkype) and bidirectional instant messaging. Also contact lists.’ – MINIATURE HERO
– ‘A tool that will permanently disable a target’s account on their computer’ – ANGRY PIRATE
– ‘Ability to artificially increase traffic to a website’ – GATEWAY
– ‘Ability to inflate page views on websites’ – SLIPSTREAM
– ‘Amplification of a given message, normally video, on popular multimedia websites (Youtube)’ – GESTATOR
– ‘Allows batch Nmap scanning over TOR’ – SILVER SPECTER
– ‘Targeted Denial Of Service against Web Servers’ – PREDATORS FACE
– ‘Distributed denial of service using P2P. Built by ICTR, deployed by JTRIG’ – ROLLING THUNDER.
Several other collection techniques involve the gathering of data from social media sites including Bebo, Google+, Twitter, LinkedIn and Facebook. Some of the tactics are “in development”, while others are “fully operational, tested and reliable”.
The documents also revealed the monitoring of YouTube and Blogger, the targeting of Anonymous with attacks similar to those of the “hacktivists”, the use of “honey traps” (luring people into compromising situations using sex) and destructive viruses. Among the tactics used there are “false flag operations” (posting material to the internet and falsely attributing it to someone else), fake victim blog posts (pretending to be a victim of the individual whose reputation they want to destroy), and posting “negative information” on various forums.
The JTRIG targets extend beyond the customary roster of normal spycraft: hostile nations and their leaders, military agencies, and intelligence services. In fact, the discussion of many of these techniques occurs in the context of using them in place of “traditional law enforcement” against people suspected – but not charged or convicted – of ordinary crimes or, more broadly still, of “hacktivism”, meaning the use of online protest activity for political ends.
Another group of the GCHQ is called the “Human Science Operations Cell” (HSOC) and is devoted to “online human intelligence” and “strategic influence and disruption”. It makes use of psychology and other social sciences to not only understand, but shape and control, how online activism and discourse unfolds. The documents disclosed lay out theories of how humans interact with one another, particularly online, and then attempt to identify ways to influence the outcomes.
Another project disclosed in the Snowden archive is called “Royal Concierge”, under which the British agency intercepts email confirmations of hotel reservations to enable it to subject hotel guests to electronic monitoring. It also contemplates how to “influence the hotel choice” of travelers and to determine whether they stay at “SIGINT friendly” hotels. The document asks: “Can we influence the hotel choice? Can we cancel their visit?”
Several other sources confirmed that the “Royal Concierge” program has been implemented and extensively used. The German magazine Der Spiegel reported that “for more than three years, GCHQ has had a system to automatically monitor hotel bookings of at least 350 upscale hotels around the world in order to target, search, and analyze reservations to detect diplomats and government officials”. Also, NBC reported that “the intelligence agency uses the information to spy on human targets through close access technical operations, which can include listening in on telephone calls and tapping hotel computers as well as sending intelligence officers to observe the targets in person at the hotels.”
NSA and DARPA
While the GCHQ documents were the first to prove that a western government is using controversial techniques to disseminate deception online and harm the reputations of targets, government plans to monitor and influence internet communications and covertly infiltrate online communities in order to sow dissension and disseminate false information have long been the source of speculation.
In a paper released in 2008, Harvard Law Professor Cass Sunstein, a close Obama adviser and the White House’s former head of the Office of Information and Regulatory Affairs, proposed that the US government employ teams of covert agents and pseudo-independent advocates to “cognitively infiltrate” online groups and websites, as well as other activist groups.
Sunstein also proposed sending covert agents into “chat rooms, online social networks, or even real-space groups” which spread what he views as false and damaging “conspiracy theories” about the government. Ironically, the same Sunstein was later named by President Obama to serve as a member of the NSA review panel created by the White House.
In 2011, the activities of users of Twitter and other social media services were recorded and analyzed as part of a major project funded by the US military. Research funded directly or indirectly by the US Department of Defense’s military research department, known as Darpa, has involved users of some of the internet’s largest destinations, including Facebook, Twitter, Pinterest and Kickstarter, for studies of social connections and how messages spread.
Darpa, established in 1958, is responsible for technological research for the US military. Its notable successes have included no less than the Arpanet, the precursor to today’s internet, as well as numerous other innovations, including onion routing, which powers anonymizing technologies like Tor.
Unveiled in 2011, the Social Media in Strategic Communication (SMISC) program was regarded as a bid by the US military to become better at both detecting and conducting propaganda campaigns on social media.
On the webpage where it has published links to the papers, Darpa states the general goal of the SMISC program is “to develop a new science of social networks built on an emerging technology base”. According to Darpa, the program seeks to “develop tools to support the efforts of human operators to counter misinformation or deception campaigns with truthful information”.
Of the funding provided by Darpa, $8.9m has been channeled through IBM to a range of academic researchers and others. A further $9.6m has gone through academic hubs like Georgia Tech and Indiana University.
Several of the DoD-funded studies went further than merely monitoring what users were communicating on their own, instead messaging unwitting participants in order to track and study how they responded.
The project list included a study of how activists with the Occupy movement used Twitter, a range of research on tracking internet memes, and work on understanding how influence behavior (liking, following, retweeting) happens on a range of popular social media platforms like Pinterest, Twitter, Kickstarter, Digg and Reddit. Other studies which received military funding channeled through IBM included one called “Modeling User Attitude toward Controversial Topics in Online Social Media”, which analyzed Twitter users’ opinions on fracking.
Several other studies related to the automatic assessment of how well different people in social networks knew one another, through analyzing frequency, tone and type of interaction between different users. Such research could have applications in the automated analysis of bulk surveillance metadata, including the controversial collection of US citizens’ phone metadata revealed by Edward Snowden.
The Facebook example
Facebook, the world’s biggest social networking site, conducted its own manipulation experiment in 2012: a study involving secret psychological tests on nearly 700,000 users. The experiment, the results of which were published in a scientific paper in the Proceedings of the National Academy of Sciences, hid “a small percentage” of emotional words from people’s news feeds, without their knowledge, to test what effect that had on the statuses or “likes” that they then posted or reacted to. It led to the conclusion that people can be made to feel more positive or negative through a process of “emotional contagion” within the social network.
During the tests, Facebook filtered users’ news feeds – the flow of comments, videos, pictures and web links posted by other people in their social network. One test reduced users’ exposure to their friends’ ‘positive emotional content’, resulting in fewer positive posts of their own. Another test reduced exposure to ‘negative emotional content’ and the opposite happened. The study concluded: “Emotions expressed by friends, via online social networks, influence our own moods, constituting, to our knowledge, the first experimental evidence for massive-scale emotional contagion via social networks”.
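As a rough illustration of the mechanism described above, the sketch below shows how posts containing emotional words might be withheld from a feed and how the sentiment of subsequent posts could then be compared; the word lists, filtering rate and data are invented for this example and do not represent Facebook’s actual methodology or code.

```python
# Illustrative sketch only: filtering "emotional" posts out of a feed and
# measuring the sentiment of what users post afterwards. Word lists and
# rates are invented; this is not Facebook's methodology.
import random

POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def emotional_polarity(post):
    """Crude word-count sentiment: positive minus negative word hits."""
    words = set(post.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def filter_feed(posts, suppress="positive", rate=0.1):
    """Drop a fraction of posts with the targeted polarity from a feed."""
    kept = []
    for post in posts:
        polarity = emotional_polarity(post)
        targeted = polarity > 0 if suppress == "positive" else polarity < 0
        if targeted and random.random() < rate:
            continue  # withhold this post from the user's feed
        kept.append(post)
    return kept

def mean_polarity(posts):
    """Average polarity of a user's own posts, used to compare groups."""
    return sum(emotional_polarity(p) for p in posts) / max(len(posts), 1)

# A contagion-style analysis would contrast the mean polarity of posts
# written by users shown the filtered feed against a control group.
feed = ["what a wonderful day", "I hate mondays", "great news", "lunch"]
print(filter_feed(feed, suppress="positive", rate=1.0))
print(mean_polarity(["feeling sad today", "awful weather"]))
```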
Facebook was also involved in at least one other military-funded social media research project, according to the records published by Darpa. The research was carried out by Xuanhuai Wang, an engineering manager at Facebook, as well as Yi Chang, a lead scientist at Yahoo labs, and others based at the Universities of Michigan and Southern California. The project related to how users understood and consumed information on Twitter.
8. Conclusion
Many researchers are already considering the profound consequences of the development of the new technologies and of their interaction with human life and society. In their view, our relationship with computers may come to feel more like companionship than the use of a device: a lifelong conversation with invisible systems that know (too?) many things about us, more intimately than most people do. Google researchers, for example, have spoken about the idea of an “intelligent cloud” that answers one’s questions directly, adapted to its increasingly intimate knowledge of the person and of everybody else. Where is the best restaurant nearby? How do I get there? Why should I buy that?
This “invisibility” raises troubling questions. One of them is the risk that we might be too quick to trust the information provided by the machines, or too willing to take their models of the world for the real thing. As motorists already know to their cost, even navigation software’s suggestions can be hopelessly wrong.
As computers slip ever further beneath human awareness, it is important that we continue to ask certain questions. What should an unseen machine be permitted to hear and see of our own, and others’, lives? Can we trust what they tell us? And how do we switch them off?
At the level of society as a whole, other, more troubling questions arise in connection with the controlling and manipulative power of the new technologies. What is the right balance between the citizen’s right to privacy and the state’s obligation to keep him or her safe? Who is equipped to determine how the government should use a person’s data? Where is the organization intellectually and financially equipped to protect the interests of citizens against the sites that exploit and commercialize their personal data?
These are only some of the main questions beginning to be asked about the age of the new technologies, as more and more people feel the need for a “technology philosopher in chief” before the technology runs away with itself.
For the moment, there is only the faint glimpse of hope expressed by Paul Syverson, one of the creators of Tor: “When you create a technology, it’s a tool that anybody can use for good or ill. To some extent, you have to trust society broadly to do good things”.