The fear of an imminent AI apocalypse is on the rise. Renowned science and technology personalities such as Stephen Hawking, Elon Musk, Steve Wozniak, and Bill Gates have expressed concern about the risks posed by Artificial Intelligence.
Most researchers agree that a superintelligent AI that becomes autonomous, able to make decisions and act on them, is a perfect recipe for catastrophe. It is unlikely to exhibit human emotions, and there is no reason to expect AI to be intentionally benevolent or malevolent. In the wrong hands, these machines, notably autonomous weapons programmed to kill, could easily cause mass casualties. Such weapons would be designed to be extremely difficult to “turn off” so that the enemy cannot thwart them, and humans could plausibly lose control of the situation.
According to Elon Musk, “as AI gets probably much smarter than humans, the relative intelligence ratio is probably similar to that between a person and a cat, maybe bigger. I do think we need to be very careful about the advancement of AI.”
Let’s look at some of the common risks that experts believe may emerge as AI advances.
Autonomous weapons race
An AI system may be programmed to do something beneficial, yet develop destructive means to fulfill the goal it has been set. For example, if you ask your smart car to take you to the airport as quickly as possible, it might get you there no matter what, causing chaos or even casualties along the way. With the widespread availability of open-source AI technologies, even a single person could cause widespread violence with face-detection-equipped drones. Imagine self-flying drones capable of detecting a person’s face and carrying out an attack. Scary.
Looking at the possibility of a similar autonomous arms race between governments and nations in the future, Russia’s President Vladimir Putin said: “Artificial intelligence is the future, not only for Russia but for all humankind. It comes with enormous opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”
Hacking and phishing
When hackers and criminals gain access to AI and machine learning techniques, they can build AI-based programs that hack into people’s computers and communicate with victims until a ransom is paid. Such a program can automate tasks like payment processing, presumably collecting ransoms faster, so hackers could target far more people at once without having to communicate with them individually. Phishing scams could also become more prevalent and effective: people’s online information and behavior, presumably scraped from social networks, could be used to automatically generate custom emails that imitate the writing style of the victims’ friends.
Fake news and propaganda
AI advances have led researchers to create realistic audio and video of political figures, designed to look and talk like their real-life counterparts. For instance, AI researchers at the University of Washington recently created a video of former President Barack Obama giving a speech that looks incredibly realistic but was fake. People could create “fake news reports” in which state leaders appear to make inflammatory comments they never made. Propagandists could use AI to target audiences, spreading whatever information they like in whatever format each audience finds most convincing.
Social manipulation and discrimination
Face recognition is widely used for police and government surveillance in many countries, particularly China. Since the technology can gather, track, and analyze so much about you, it can also be used against you. It is a tremendous means of invading people’s privacy, and biases in training data make automated discrimination likely.
Last year, bias was found in numerous commercial tools. Vision algorithms have already failed to recognize women and people of color, and hiring programs have been shown to perpetuate the discrimination that already exists.
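How biased training data leads to biased decisions can be shown with a minimal sketch. The records, group labels, and “model” below are entirely hypothetical: a simple majority-outcome rule stands in for a real machine learning classifier, but the effect is the same — a system trained on biased hiring history reproduces that bias.

```python
# Minimal sketch (hypothetical data) of bias learned from history.
# A "majority outcome per group" rule stands in for a real ML classifier.
from collections import defaultdict

# Hypothetical past hiring records: (group, qualified, hired).
# Group "B" candidates were historically rejected even when qualified.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

# "Train": record the hire rate for each (group, qualified) combination.
counts = defaultdict(lambda: [0, 0])  # (hires, total)
for group, qualified, hired in history:
    counts[(group, qualified)][0] += int(hired)
    counts[(group, qualified)][1] += 1

def predict(group, qualified):
    hires, total = counts[(group, qualified)]
    return hires / total >= 0.5  # hire if past hire rate was at least 50%

# Two equally qualified candidates get different predictions,
# purely because of the biased training data.
print(predict("A", True))  # True
print(predict("B", True))  # False
```

Nothing in the rule mentions group membership as a criterion; the discrimination comes entirely from the data it was fitted to, which is why auditing training data matters as much as auditing the algorithm.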
Algorithms are very efficient at targeted marketing on social media because they know who we are and what we like, and they are incredibly good at surmising what we think. Investigations are still underway into Cambridge Analytica and the companies associated with it, which used data from 50 million Facebook users to try to sway the outcome of the 2016 U.S. presidential election and the U.K. referendum on Brexit. If the charges are correct, they illustrate AI’s power of social manipulation.
Invasion of privacy
With home voice assistants like Google Home and Amazon Alexa listening to user conversations, people’s fears about being tracked based on what they say, think, or search online are increasing. Cameras are almost everywhere, and with facial recognition algorithms it is now possible to track and analyze an individual’s every move, online as well as while going about their daily business. When Big Brother watches you and then makes decisions based on that intelligence, it is an invasion of privacy that can quickly turn into social oppression.
Over the past few years, China has implemented a social credit system that tracks and evaluates people based largely on a combination of mostly minor offenses, internet activity, financial records, private messages, health background, dating history, etc. People with low social credit have been barred from buying plane tickets, enrolling their children in elite schools (regardless of the children’s skills and abilities), or even leaving the country under certain circumstances. Conversely, the system benefits those with high social credit scores. Similar adverse effects could occur elsewhere if social credit systems are adopted globally.
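The mechanics of such gating are simple to sketch. Everything below is invented for illustration — the signals, penalty weights, base score, and travel threshold are purely hypothetical — but it shows how many small recorded infractions can aggregate into one score that controls access to everyday services.

```python
# Purely hypothetical sketch of score-based gating.
# All signals, weights, and thresholds are invented for illustration.

BASE_SCORE = 1000
PENALTIES = {"jaywalking": 5, "late_bill": 20, "flagged_post": 50}
TRAVEL_THRESHOLD = 950  # minimum score required to buy a plane ticket

def credit_score(events):
    """Subtract a fixed penalty for each recorded minor offense."""
    return BASE_SCORE - sum(PENALTIES.get(e, 0) for e in events)

def may_buy_plane_ticket(events):
    return credit_score(events) >= TRAVEL_THRESHOLD

# A handful of minor infractions is enough to cross the threshold:
events = ["jaywalking", "late_bill", "flagged_post"]
print(credit_score(events))          # 925
print(may_buy_plane_ticket(events))  # False
```

The worrying property is the aggregation itself: no single event would justify a travel ban, but an opaque sum over many trivial ones can, with no obvious point at which the person being scored could object.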
Loss of jobs
AI machines can be more productive and reliable than their human counterparts. Many human-only concerns are eliminated: personal and emotional situations, lethargy and exhaustion, boredom and distraction. Machines fueled by AI do not have human emotional capacity, so mechanized employees cannot be affected by emotional events inside or outside the workplace. In addition, machines powered by electricity or batteries do not face a human-like “brain drain”: they do not tire or slow down, they do not find repetitive tasks such as packing, sealing, and stamping boxes boring or tedious, and they make far fewer mistakes. Many manufacturers therefore prefer AI-automated employees, and the loss of jobs looms for humans.
Let’s sum up: any powerful technology can be misused, and as our AI capabilities expand, we will unfortunately also see them used for dangerous or malicious purposes. Since AI technology is advancing so rapidly, it is vital that we start debating how AI can develop positively while minimizing its potential for destruction.