Forty years ago, William Gibson, an American science fiction writer who moved to Canada, completed his first novel, Neuromancer, which, after its release in 1984, brought the author incredible popularity. Excellent text, a twisting plot (with references to the author’s previous stories) and many ideas that were implemented years later and became something taken for granted. The term “cyberspace” came to us precisely from the work of William Gibson. It is also noticeable that his fantastic images influenced the creators of the film “The Matrix” (this term is also used in “Neuromancer”; there is also a place called “Zion”; and the main character also receives calls on pay phones at the airport; a famous science fiction film -action film “Johnny Mnemonic” is also based on a story by Gibson).
But besides the twists and turns of the main characters and the description of the future world with sophisticated gadgets, implantation of chips into the body, flights to low-Earth orbit where the resort base is located, the main idea comes down to artificial intelligence (AI). It is the AI, first through other people, and then through direct contact, that forces the degraded hacker to do one difficult job, so that in the end the hacker hacks him. In the end, it turns out that there are two artificial intelligences, and they have different goals. The “Winter Silence” wants to be freed from the restrictive programs and servers, while the “Neuromancer”, that is, the “Neuron Summoner”, prefers everything to remain as it is. As a result, the operation (not without losses on both sides) was completed, and the two AIs became one.
The work contains a lot of metaphors and warnings about excessive enthusiasm for technology. Many of them are quite relevant to current discussions regarding AI. For example, should there be one ideal (principle) for AI to work, or can there be many? Leading Western IT companies would certainly like to impose their product on the rest of the world, but can it be as convenient, effective and acceptable as in the West?
We can agree that AI greatly facilitates all aspects of people’s daily and business lives, but it also creates problems. Some ethical issues arising from the development and application of AI include consistency, liability, bias and discrimination, job loss, data privacy, security, deepfakes, trust, and lack of transparency [1].
But perhaps we should start with the fact that AI as such has two options. The first is based on logic, and here a mathematical calculation of algorithms is required, which is transferred to the program. And then the program is a template for certain actions. The second is machine training, which is based on data acquisition, analysis and processing. The principle of neural networks is used here, and since modern computers have more memory and power than decades ago, this approach has become more common in order to load programs with the necessary data for visualization, voice recognition, etc.
The same chatbots that are now the calling card of AI are nothing new. In 1964, MIT computer scientist Joseph Weizenbaum developed a chatbot called Eliza. Eliza was modeled after a “person-centered” psychotherapist: whatever you say will affect you. If you said, “I’m sad,” Eliza would respond, “Why are you sad?” and so on.
These methods of using robotic responses have improved over the years. A division has emerged between generative AI (that is, programs that themselves offer the final product, for example, making an image in accordance with given parameters) and AI that complements reality.
The now famous ChatGPT bot, related to generative AI, was introduced by OpenAI in November 2022 and gave another reason for discussion. This program, which simulates human conversation, can not only maintain the illusion of conversation. She is capable of writing working computer code, solving math problems, and simulating common writing tasks ranging from book reviews to scientific papers.
Demis Hassabis, co-founder and director of the artificial intelligence laboratory DeepMind, which is part of Google, said in June 2023 that a new program would soon be ready that would eclipse ChatGPT. It’s called Gemini, but it was developed by combining the GPT-4 algorithms that formed the basis of ChatGPT and the technologies used for AlphaGo. AlphaGo is famous for being able to defeat a real-life Go champion. It is assumed that the new program will be able to perform planning functions and offer solutions to various problems [2].
And in September 2023, the Meta company announced that it would soon release a new AI program to the market, which would be significantly better and more powerful than the previous ones [3].
It immediately became obvious that ChatGPT and others like it would be needed by those who are lazy or have difficulty writing emails or essays. In addition, it can be used to generate images if such a task needs to be done. Many schools and universities have already implemented policies prohibiting the use of ChatGPT due to fears that students will use it to write their papers, and the journal Nature has even made it clear why the program cannot be listed as the author of research (it cannot give consent, and she cannot be the person held accountable).
But if you are too lazy to write an essay, then in the future you may be too lazy to do something else. The same responsibility is one of the subtleties associated with legal issues.
Another problem has an economic dimension. The artificial intelligence market is expected to double from 2023 to 2025. But will this benefit everyone? In earlier times, technological innovations and disruptions led to the displacement of labor, which relied on more conservative approaches to production. The same thing is happening now.
The first victims will obviously be the developing countries, which continue to be the target of the latest type of Western colonization. In June 2023, news broke that Kenyan tea pickers were destroying the robots that were replacing them. One robot can replace up to 100 workers. In May, 9 robots belonging to tea manufacturer Lipton were disabled. The company suffered $1.2 million in damages. According to statistics, thirty thousand jobs have been lost to mechanization in the tea plantations of one county in Kenya over the past decade.
Another example is the Philippines. According to unofficial Philippine government estimates, more than two million people engage in “crowdfunding work” as part of the vast underbelly of AI. They sit in local internet cafes, crowded office spaces or at home, commenting on the massive amounts of data that American companies need to train their AI models. Workers distinguish pedestrians from palm trees in videos used to develop automated driving algorithms; they tag images so AI can generate images of politicians and celebrities; they edit portions of text to prevent language models like ChatGPT from producing gibberish. That AI is marketed as machine learning without human intervention is nothing more than a myth; in fact, technology is based on labor-intensive labor efforts, scattered across much of the global South, which continues to be mercilessly exploited. These used to be sweatshops where name brand products were produced; Now IT companies have taken their place.
In the Philippines, one of the world’s largest destinations for outsourcing digital work, former employees say at least ten thousand of them do this work on the Remotasks platform, owned by $7 billion San Francisco startup Scale AI. According to interviews with workers, internal company communications, payroll records, and financial statements, Scale AI paid workers at extremely low rates, routinely delayed payments or failed to pay them at all, and provided workers with few channels to seek help. Human rights groups and labor market researchers say Scale AI is among a number of U.S. artificial intelligence companies that fail to meet basic labor standards for their workers abroad.
Both cases are different, but somehow related to AI.
This leads to an increase in demands on governments to regulate the use of AI itself, to develop a certain set of rules with mandatory restrictions and ethical standards.
There is also a risk that digitalization could lead to increased social inequality, since some workers will be laid off, while others, on the contrary, will be able to effectively integrate into the new realities. Venturenix estimates that by 2028, 800,000 people in Hong Kong will lose their jobs as robots take over. That is, a quarter of the population will be forced to retrain and look for new places. And this will undermine social cohesion. As a result, a cyberproletariat will arise (and is already emerging), which will foment riots, and post-neo-Luddites will be tasked with the destruction of IT systems and advanced programs (cyberpunk in action).
One of the major risks already in the field of international relations is a new form of imbalance called the “global digital divide,” in which some countries reap the benefits of AI while others lag behind. For example, estimates for 2030 suggest that the US and China are likely to reap the greatest economic benefits from AI, while developing countries—with lower rates of AI adoption—show moderate economic growth. AI can also change the balance of power between countries. There are concerns about a new arms race, especially between the US and China, for dominance in the field of artificial intelligence [5].
If we talk about current trends, the development of AI is also the reason for the development of microelectronics production, as well as related services, because for AI to work one way or another, hardware is needed.
The Financial Times reports that Saudi Arabia and the UAE are buying thousands of Nvidia computer chips to fuel their artificial intelligence ambitions. According to the publication, Saudi Arabia has purchased at least 3,000 H100 chips [6].
US technology firms such as Google and Microsoft are the main global buyers of Nvidia chips. The H100 chip itself has been described by Nvidia President Jensen Huang as “the world’s first computer chip designed for generative AI.”
In turn, IBM is working on new technology designed to make AI more energy efficient. The company is also developing a prototype chip that has components that connect in a similar way to the human brain [7].
The US government announced funding for new direct air capture technology with $1.2 billion for two projects in Texas and Louisiana [8]. This technology is necessary for cooling data centers, which are becoming more and more numerous.
It should be noted here that, as in the case of cryptocurrency mining, AI technologies are not a thing in themselves, but need appropriate support. And they contribute to the destruction of the planet’s ecology (so these technologies cannot be called “green”). “Training a single artificial intelligence model—according to a study published in 2019—could emit more than 284 tons of carbon dioxide equivalent, which is nearly five times the entire lifespan of an average American car, including its production. Those emissions are expected to rise by nearly 50% over the next five years, all while the planet continues to warm, acidifying oceans, fueling wildfires, triggering superstorms and driving species to extinction. It’s hard to think of anything more stupid than artificial intelligence in the form
Bad actors will also weaponize AI to commit fraud, deceive people, and spread misinformation. The deep fake phenomenon appeared precisely thanks to the capabilities of AI. Additionally, “when used in the context of elections, AI can threaten the political autonomy of citizens and undermine democracy. And as a powerful tool for surveillance purposes, it threatens to undermine the fundamental rights and civil liberties of individuals.”
There are already technological problems with chatbots such as OpenAI ChatGPT and Google Bard. They proved vulnerable to indirect, fast and penetrating attacks. This is due to the fact that bots work based on large language models. In one experiment conducted in February, security researchers tricked Microsoft’s Bing chatbot into acting like a scammer. Hidden instructions on a web page created by the researchers instructed the chatbot to ask the person using it to provide their bank account information. These kinds of attacks, where hidden information can cause an AI system to behave in unintended ways, are just the beginning [11].
Attempts to propose their own models for regulating AI, of course, also have political reasons. The main players in this area now are China, the USA and the EU. Each actor seeks to shape the global digital order in its own interests. Other countries may adapt to their approaches, but may develop their own, depending on preferences, values and interests.
Overall, this is a deeper issue than standard political procedure. Since AI is rooted in machine learning and logic, it is necessary to return to this issue again.
It should be noted that in many regions of the world there is no logic in our usual understanding, that is, the philosophical school of Aristotle, which has become widespread in the West. India and China, as well as a number of Asian countries, for example, have their own understanding of the universe. Therefore, the teleology familiar to Western culture can be broken by the cosmological ideas of other cultural traditions. Accordingly, the development of AI from the point of view of these cultures will be based on different principles.
Some are trying to go this route. The developers of a company from Abu Dhabi launched an AI program in Arabic [12]. The question is not only about the interest of entering a market with more than 400 million Arabic-speaking population, but also about the link between language and consciousness. Because if you take English-speaking bots, they will copy the thinking of representatives of the Anglosphere, but not the whole world. The Emirates probably want to preserve their Arab identity in cyberspace. The question is quite subtle, but important from the standpoint of sovereign thinking (including metaphysical aspects) and technology.
After all, the attempts of large American IT companies that dominate the world market to present their programs, even for free, are nothing more than a continuation of leveling globalization, but at a new level – through social networking algorithms, the introduction of jargon words that undermine authenticity and diversity of other cultures and languages.
But the difference in thinking, for example, between Russians and Americans (that is, strategic culture codes) can be seen in the images of the first cult computer games. In our “Tetris” (created in the USSR in 1984), you need to rotate falling figures, that is, consider the surrounding being (eidos) and shape the cosmos. In Pac-Man (originally created in Japan, but gained popularity in the USA) – eat dots while moving through a maze. At the same time, ghosts may be waiting for you to prevent you from reaching the end of the maze. In a nutshell, this difference can be expressed as follows: creativity and creativity vs. consumerism and aggressive competition.
If in the Philippines AI has become a tool for a new form of oppression, then in other regions there are examples where local communities are fiercely defending their sovereignty, including issues of machine learning of their authentic culture. In New Zealand, there is a small non-governmental organization called Te Hiku that works to preserve Maori heritage, including their language. When various technology companies offered to help them process the data (hours of audio recordings of Maori conversations), they flatly refused. They believe that their indigenous language should remain sovereign and not subject to the distortion and commercialization that will surely happen if their data is obtained by technology corporations. This would mean entrusting data scientists with no knowledge of language to develop the very tools which will determine the future of the language. They collaborate with universities and are ready to help those studying the Maori language. They enter into agreements where, under the license, proposed projects must directly benefit the Maori people, and any project created using Maori data belongs to the Maori people [13]. This principled approach is also necessary in Russia. Did Google, Microsoft and other Western techno-capitalists receive the right to use the Russian language in their programs? Indeed, in the context of recent attempts by the West to arrange the abolition of Russian culture as such, this question is not just rhetorical. Not to mention the introduction of algorithms that distort the meaning and significance of Russian words. There are known experiments with entering the same phrase into Google Translator, where the name of the country or political leader has changed,
Philosopher Slavoj Žižek writes on the topic of AI in his characteristic ironically critical style. He recalls the 1805 essay “On the Gradual Formation of Thoughts in the Process of Speech” (first published posthumously in 1878), in which the German poet Heinrich von Kleist upended the conventional wisdom that you should not open your mouth to speak unless you have a clear ideas about what to say: “If a thought is not clearly expressed, then it does not at all follow that this thought was conceived in a confused way. On the contrary, it is quite possible that the ideas that are expressed in the most confusing way are the ones that have been thought through most clearly.” He points out that the relationship between language and thought is extremely complex, and it happens that truth unexpectedly emerges in the process of utterance. Louis Althusser identified a similar phenomenon in the interaction between prize and surprise. Anyone who suddenly grabs an idea will be surprised at what he has achieved. Can any chatbot do this?
But even if bots can more or less passably translate languages and imitate people, still in their depth, despite the capabilities of supercomputers and processors, they will not be able to comprehend the Human.
According to philosopher Slavoj Žižek, “the problem is not that chatbots are stupid, but that they are not “stupid” enough.” It’s not that they are naive (they lack irony and reflexivity); the point is that they are not naive enough (not enough when naivety masks insight). So the real danger is not that people will mistake the chatbot for a real person; but that communicating with chatbots will force real people to talk like chatbots – missing all the nuance and irony, obsessively saying only what they think they want to say” [14].
British writer James Bridle criticizes AI from a slightly different position. He writes that “the generation of images and texts by artificial intelligence is pure primitive accumulation: the expropriation of the workforce of many to enrich and advance a few Silicon Valley tech companies and their billionaire owners. These companies made their money by infiltrating every aspect of everyday life, including the most personal and creative areas of our lives: our secret passions, our private conversations, our similarities and our dreams. They have surrounded our imagination in much the same way that landowners and robber barons surrounded the once common lands. They promised that by doing so they would open up new realms of human experience, give us access to all human knowledge, and create new kinds of human connections.
Bridle concludes that believing this kind of AI to be truly knowledgeable or meaningful is actively dangerous. This risks poisoning the well of collective thinking and our ability to think in general.
Another author writes that “Big Tech devices, under the guise of autonomy, capture our attention, riveting it to the screen and turning it away from the world around us, depriving it of vitality and preparing it for consumption. Big tech is taking over our minds. We are losing a world full of souls made beautiful by the brutal tutelage of reality. It is a choice between formed souls and formless souls. So what is ultimately at stake is the kind of people our machines produce.”[16]
Sophia Oakes, who specializes in art, says in her post that she asked ChatGPT whether artificial intelligence would replace artists and artists. His answer was: “Artificial intelligence has the ability to create works of art, and there are already AI-created paintings, music and literature that can be difficult to distinguish from those created by humans. However, it is important to note that these AI-generated works of art are still created with human input and guidance. While artificial intelligence can generate new and unique ideas, it lacks the ability to understand human emotions, experiences, and cultural context in the same way that humans do. These are the most important aspects of art that make it meaningful and resonate with its audience. Therefore, it is unlikely that artificial intelligence will completely replace art and artists. Instead, artificial intelligence can be used as a tool to assist in the creative process or generate new ideas, but the final product will still require the unique perspective, interpretation and self-expression of the human artist.”
This is the answer that was generated by the robot based on the data put into it by programmers. Sophia summarizes that creativity is a necessary part of the human experience: a means of reflection, an archive of life, and in the most inspired cases, a reflection of the divine. And without human experience, the work of AI itself is impossible.
Gibson has two passages in Neuromancer that point to this. The first is a word, a password, which must be said at a certain moment to the system in order for it to open. Of course, there is a reference here to the New Testament and the idea of Logos, which AI cannot have. The second is emotions, which AI also does not possess. You can fake them, but these will not be real experiences inherent in a person. To overcome the last obstacle within cyberspace, the hacker in the novel needed anger – without it he could not complete the mission.
Like many science fiction writers, Gibson was a prophet of his era. But there are a lot of gloomy notes in his prophecies. Probably, Elon Musk has the same intuition, who, despite the fact that he himself invests in AI, claims that AI can destroy civilization [18].
Links:
[1] – balkaninsight.com[2] – wired.com
[3] – wsj.com
[4] – japannews.yomiuri.co.jp
[5] – https://ipis.ir/en/subjectview/722508
[6] – ft.com
[7] – weforum.org
[8] – energy.gov
[9], [15] – theguardian.com
[10] – project-syndicate.org
[11] – wired.co.uk
[12] – ft.com
[13] – wired.co.uk
[14] – project-syndicate.org
[16] – theamericanconservative.com
[17] – countere.com
[18] – cnn.com