Perhaps the recent media hype about artificial intelligence (AI) and OpenAI’s “not-really-so-open” program ChatGPT has made many people worried about AI. Some – even inside AI – are convinced that we can’t afford an AI-shima – a Fukushima-like atomic accident inside our computers and the Internet. Like a nuclear accident that affects millions, a global AI accident is something we can’t afford.
Even – or better, “in particular” – IT experts have warned us, often in rather drastic words, against unrestrained AI development. Again, the neoliberal hallucination of an ever-elusive unregulated free market does not seem to deliver.
Instead, tighter regulation is needed – the worst nightmare of the super-wealthy, as it challenges, once again, their favourite ideology: neoliberal techno-libertarianism. At the same time, artificial intelligence is rapidly becoming an everyday interlocutor through programs like ChatGPT – incognito or not.
Yet, despite the media pretense that ChatGPT is all-powerful, such AI systems can also generate rafts of incorrect answers. As a consequence, the human being is still badly needed. As for ChatGPT, on a rather simple query like “Thomas Klikauer”, it delivered the following:
Thomas Klikauer is a sociologist and a professor at the School of Business at Western Sydney University in Australia. He has published extensively on topics such as management, globalization, and corporate social responsibility. He is also known for his critical analysis of capitalist systems and his advocacy for alternative economic models that prioritize social and environmental sustainability. Klikauer has authored or co-authored several books, including “The Management Theory of Frank Stilwell”, “Critical Management Ethics”, and “Megacities: Our Global Urban Future”.
Within seconds of my typing in my name, ChatGPT generated four grave errors about the author of this article. These could have been avoided by, for example, simply looking at either Wikipedia or klikauer.wordpress.com. Unlike the still very futuristic ChatGPT-6, the current ChatGPT failed to do that. Instead, it delivered – rather convincingly for the unsuspecting observer – four errors:
Thomas Klikauer is not a sociologist – I have two degrees in political science and a PhD in business studies (see klikauer.wordpress.com) but none in sociology, and I have not published in sociology journals;
I am also not a professor – at Australian universities, we are called lecturers; my position is “senior lecturer”, not professor;
I have never co-authored a book on “The Management Theory of Frank Stilwell” – that is plainly wrong. In fact, it does not even make sense, as Frank Stilwell does not publish on “management theory” – not to mention that “management” is hardly a theoretical subject; it is akin to storytelling; and finally,
I have never written a book called “Megacities” on “Our Global Urban Future”, nor do I have any intention of writing such a book. In fact, I will – very likely – never write anything on either “megacities” or “our global urban future”.
Yet, the advent of ChatGPT does not make only IT experts worried about AI. In an open letter, no fewer than 27,567 AI and IT experts demand, first of all, that society set limits before AI is used in an unrestrained fashion. One signature on the “let’s pause a little bit on AI” letter came from computer scientist Stuart J. Russell, author of the standard AI textbook “Artificial Intelligence – A Modern Approach”. He is a professor at the University of California, Berkeley, and despite working on artificial intelligence for decades, Russell believes that the current unregulated rush is a recipe for disaster. Russell has also used rather sweeping words about the potential problems caused by AI.
Since about the end of last year, AI has been on everyone’s lips. Thanks to programs like ChatGPT and image generators like Midjourney, AI has reached the mainstream. Meanwhile, Stuart J. Russell went even further in an interview in which he strongly warned of the impending consequences of AI technology. Many have argued that the current surge in unregulated AI is a turning point for the industry and perhaps also for humanity.
Russell also believes that we can’t afford a Chernobyl of AI. In the meantime, AI on online media platforms has already produced the by-now-infamous picture of the Pope in a fashionable puffer coat, which sped around the Internet.
It shows – just like the equally faked “wishful thinking” photo of Donald Trump being arrested – why we can no longer trust our eyes. As a consequence, many now fear an impending AI catastrophe.
At the same time, Russell did not choose the comparison between Chernobyl and AI by accident. For him, the current situation is comparable to the construction of a nuclear power plant without the necessary safety measures. Russell argues that if he wanted to build a nuclear power plant, the government would demand proof that it is safe, can withstand an earthquake, and will not explode.
Even in the extremely unlikely case that someone could actually assure the seemingly impossible – the absolute safety of an atomic plant, for example – the government would be unlikely to say: just start building the atomic plant, it’s all right, all will be fine. It is for this reason that the AI industry should also be given strict limits by the government – ideally in an “industry-people-government” cooperation.
One of the most serious problems with current AI development is that we simply don’t know exactly how this technology works – a 1.29-minute video on YouTube notwithstanding. At its most basic level, artificial intelligence means training models on data sets and then trying to steer them in a certain direction. Such a model ultimately analyses existing statistics to predict the most likely next outcome – a kind of Bayesian statistics on steroids, as taught, for example, in Stanford’s CS221. Its outputs are guesses about what the data mean.
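To make the “statistics on steroids” point concrete, here is a minimal sketch – not how ChatGPT actually works internally, merely an illustration of the underlying idea – of a model that counts which word follows which in a tiny training corpus and then “predicts” the most likely next word:

```python
from collections import Counter, defaultdict

# Toy training corpus; real models train on billions of documents.
corpus = "the pope wears a coat the pope wears a hat the cat wears a coat".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word seen in training."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("pope"))   # -> 'wears' (seen most often after 'pope')
print(predict_next("wears"))  # -> 'a'
```

The model has no idea what a pope or a coat is; it only knows which word most often followed which. Scaled up by many orders of magnitude, this family of educated guesswork is what sits behind systems like ChatGPT.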
Apart from all this, we still hope that programs like ChatGPT can deliver a mathematical proof or a poem in the style of Shakespeare. But most of us have no idea how they really do it – hence they appear to be magic. Few would admit that we just don’t know.
This fact alone makes it far more difficult to prevent AI from misbehaving. It is a bit like scolding a dog: one repeatedly tells the dog that he is a ‘bad dog’ when he has done something wrong, in the hope that he will learn from it. In another open letter – published at the end of March and signed by tech billionaire Elon Musk and Apple co-founder Steve Wozniak, in addition to Russell – the signatories called for an immediate “halt” to the development of AI.
But that is still not enough for Russell. From his point of view, the requested six-month pause on AI is simply not sufficient. In his opinion – as outlined in the open letter – the pause is meant to be used to think about AI and to develop guidelines for its safe use. These guidelines would, firstly, have to define different types of AI; secondly, only after proof of compliance could companies then release their AI programs.
In cases where these guidelines cannot be defined, or where compliance cannot be proven, the AI program in question would be put on hold. However, such a demand is not to be understood as a rejection of AI.
Russell has been researching artificial intelligence for 45 years, and he loves it. He still believes that its potential to change the world for the better is limitless. At the same time, our societies do not want to experience the IT version of another Chernobyl or Fukushima. That would have truly serious consequences. Worse, what such an AI-shima might actually look like, we just don’t know – yet. Accordingly, IT experts demand that society take this responsibility seriously.
Perhaps not as threatening as a global AI Armageddon is the fact that AI can crack almost any password currently in use – and it can do so in minutes. This ability opens up tremendous opportunities for criminal hackers. But the danger does not come from AI-driven password-cracking programs alone; peril also lies in something not directly linked to AI at all.
The raison d’être of a password is to protect against unauthorized access. Set against this is the fact that artificial intelligence is able to crack countless passwords in a very short time. Since AI operates with machine learning, these password-cracking programs can teach themselves how to get better and better at cracking passwords.
In reality, the problem is the passwords themselves. At first glance, the numbers seem quite impressive: within minutes, the AI-based software program “PassGAN”, available on GitHub, is said to crack any password under seven characters; in testing, it cracked 65% of all passwords tried within an hour.
What is likely to trigger a reflexive reaction like “OMG! My password was hacked!” shows one thing above all: all too many people make the same mistakes over and over again when choosing a password.
Worse, reusing passwords makes it even easier for AI to crack them. Yet choosing an easily crackable password can be avoided. In 2023, the most common passwords were: 123456, 123456789, qwerty, password, 12345, qwerty123, 1q2w3e, 12345678, 111111, and 1234567890. Anyone using one of these is handing AI an easy win.
In order to guess your password(s) so quickly, AI is trained on databases of real passwords. Based on the patterns recognized there, a software program such as PassGAN simply generates likely guesses. This works because the majority of passwords are simply bad to start with.
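PassGAN itself uses a generative adversarial network, but the underlying idea – learn the statistics of leaked passwords, then generate guesses shaped like them – can be sketched with a far simpler character-level model (a toy illustration, not PassGAN’s actual code; the “leaked” list here is invented):

```python
import random
from collections import Counter, defaultdict

# Toy stand-in for a leaked-password database (real ones hold millions).
leaked = ["password1", "sunshine12", "dragon99", "princess1", "shadow123"]

# Learn which character tends to follow which (a character-level Markov chain).
transitions = defaultdict(Counter)
for pw in leaked:
    padded = "^" + pw          # '^' marks the start of a password
    for a, b in zip(padded, padded[1:]):
        transitions[a][b] += 1

def generate_guess(length=9):
    """Generate one password guess following the learned character patterns."""
    out, current = [], "^"
    for _ in range(length):
        counter = transitions.get(current)
        if not counter:
            break
        chars, weights = zip(*counter.items())
        current = random.choices(chars, weights=weights)[0]
        out.append(current)
    return "".join(out)

# Candidate guesses are statistically shaped like real leaked passwords.
print([generate_guess() for _ in range(5)])
```

Because human-chosen passwords cluster around a handful of patterns – a common word plus a digit or two – guesses generated this way hit far more often than blind brute force.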
As such, the passwords themselves are largely to blame. For years, people have been taught that passwords must be complicated and – here it gets really bad – changed regularly. However, this has led to the very opposite of security: because people simply cannot remember many frequently changing passwords, they opt for short, simple ones.
Even ghastlier, short passwords are often constructed according to certain schemes – your pet’s name, your birthday, and so on. AI can detect these “easy to remember” schemes and use them to crack your passwords.
For AI machines, “easy to remember” password schemes like your name, your phone number, or 1-2-3-4-5-6 are very easy to guess. Even though sophisticated algorithm-driven programs like PassGAN do not really outperform conventional approaches to password cracking, they will – very stoically and with sheer endless determination – try different letter and number combinations until they crack your password.
Yet AI can weigh the probability of where letters, numbers, and symbols are typically placed. Worse, AI password-cracking programs tend to perform even better than human password crackers.
Together with gigantic lists of known words and already-used passwords, and with so-called “mangling”, in which popular variants of a word are tried out (a toy example follows below), passwords can be cracked just as quickly or even faster without the assistance of AI. The ideal way to deal with passwords is to make them as difficult as possible for any machine to guess. And there is one rule above all others:
there is no password that is too long.
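The “mangling” just mentioned deserves a quick illustration before we turn to the arithmetic behind that rule. The sketch below is a toy version of how cracking tools expand a dictionary word into popular variants (real rule sets, such as those used by hashcat, are far larger):

```python
# Toy "mangling": expand each dictionary word into popular variants,
# the way classic cracking tools do - heavily simplified here.
SUBS = {"a": "@", "o": "0", "e": "3", "s": "$", "i": "1"}

def mangle(word):
    variants = {word, word.capitalize(), word + "1", word + "123", word + "!"}
    leet = "".join(SUBS.get(c, c) for c in word)
    variants.update({leet, leet.capitalize() + "1"})
    return variants

print(sorted(mangle("password")))
# ['P@$$w0rd1', 'Password', 'p@$$w0rd', 'password', 'password!', 'password1', 'password123']
```

One dictionary word thus becomes dozens of guesses, which is why “P@ssw0rd!” is barely safer than “password”.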
Today’s software can try out billions of combinations per second. As a consequence, every password under five characters is cracked almost immediately – no matter how complex. On the other hand, a 13-character password consisting only of random lowercase letters takes, on average, about two months to crack.
If just one capital letter is added, the time needed to crack the password stretches to about 1,000 years. Each additional variable, such as numbers or special characters (@ ^ * # & $ +), drives the time needed up further. But simply adding more characters also helps: a password of 18 random lowercase letters would take about two million years to crack.
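The arithmetic behind such figures is plain exponentiation: the search space is the alphabet size raised to the password length, divided by the attacker’s guess rate. A small sketch – the rate of 10^12 guesses per second is an assumption for illustration; real rates depend heavily on hardware and on how the password is hashed, so the absolute numbers will differ from those above:

```python
# Brute-force search space: alphabet_size ** length guesses in the worst case.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def crack_time_years(alphabet_size, length, guesses_per_second=10**12):
    """Worst-case brute-force time in years at an assumed guess rate."""
    return alphabet_size ** length / guesses_per_second / SECONDS_PER_YEAR

print(crack_time_years(26, 5))    # 5 lowercase: effectively instant
print(crack_time_years(26, 13))   # 13 lowercase: ~0.08 years at this rate
print(crack_time_years(52, 13))   # add capitals: 2**13 = 8192x longer (~645 years)
print(crack_time_years(26, 18))   # 18 lowercase: roughly a million years
```

The exact figures hinge on the assumed guess rate, but the lesson sits in the exponent: each extra character multiplies the attacker’s work by the full alphabet size, which is why length beats complexity.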
A sequence of several unrelated words – a so-called passphrase – is therefore more secure than any short but complex password. It is also easier for people to remember. In any case, IT security experts have long since agreed to abandon one of the most pointless requirements: constantly changing your password. That only ensures that you choose an easy one. The clearest anti-PassGAN recommendation is therefore:
a password should only be changed if there are signs that it has been leaked or stolen.
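As for the passphrase recommended above: generating one safely takes only a few lines – a minimal sketch using Python’s standard library (the word list here is a tiny placeholder; in practice one would use a large list such as EFF’s Diceware list of 7,776 words):

```python
import secrets

# Tiny placeholder word list; a real one (e.g. EFF's Diceware list) has ~7,776 words.
WORDS = ["correct", "horse", "battery", "staple", "pope", "coat",
         "reactor", "lecture", "sydney", "puffer", "random", "guess"]

def passphrase(n_words=5, separator="-"):
    """Pick n unrelated words with a cryptographically secure RNG."""
    return separator.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # e.g. 'reactor-coat-guess-sydney-horse'
```

With a real 7,776-word list, five random words give roughly 64 bits of entropy – comfortably more than a typical eight-character “complex” password.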
However, you should still refrain from using the same password multiple times – do not reuse one password across different applications. To keep track of them all, a password manager – 1Password, Dashlane, Keeper, Bitwarden, LastPass, Apple’s built-in password manager – is recommended. With one of these, passwords can be created and saved automatically; all you then have to remember is the master password for the manager itself.
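What a manager does when it “creates” a password can likewise be approximated in a few lines – a hedged sketch of the general idea, not any particular manager’s implementation:

```python
import secrets
import string

def random_password(length=20):
    """Generate the kind of long random password a manager would store for you."""
    alphabet = string.ascii_letters + string.digits + "@^*#&$+"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())  # e.g. 'q7@Rw#xT2mA^fK9$bL1p' - unmemorable, and that is the point
```

Because the manager remembers it for you, such a password never needs to be human-memorable – which removes exactly the weakness that PassGAN exploits.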