Artificial Intelligence (AI) is all around us. It is reshaping our economies, promising productivity gains, greater efficiency, and lower costs. It contributes to better lives and helps people make better predictions and more informed decisions. However, this “narrow AI,” designed to solve specific, well-defined problems, is distinctly different from Artificial General Intelligence (AGI).
AGI, also known as ‘full AI’ or ‘strong AI,’ aims to embed general intelligence in machines. There is no evidence that any practical strong AI system exists today that fully matches human-like intelligence; however, there have been active attempts to develop libraries and software for building such systems.
Most of today’s AI methods try to mimic the functional or structural behavior of the human brain and its neural networks through knowledge representation, statistical models, and complex networks.
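To make the “statistical models” part concrete, here is a minimal sketch, assuming nothing beyond Python and NumPy: a tiny two-layer neural network that learns the XOR function by gradient descent. The layer sizes, learning rate, and iteration count are illustrative choices, not a reference implementation of any particular system.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets: a classic problem no single linear model can solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights for a 2-4-1 network (illustrative sizes)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: error signals from the squared loss
    # (constant factors are folded into the learning rate)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3))  # should approach [0, 1, 1, 0]
```

Production narrow-AI systems are vastly larger, but they rest on the same idea: adjust numeric parameters until a statistical model fits the data.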
AGI is often envisioned as being self-aware and capable of complex thought. As a result, it has been a staple of science fiction, appearing prominently in popular films such as 2001: A Space Odyssey, Terminator, and I, Robot. Interestingly, in each of these films, the machines go beyond their original programmed purpose and become violent threats to humans.
This post will discuss how AGI relates to the well-being of humans, including how machines can help us and how they could potentially harm us.
AGI in the wrong hands
If generally intelligent machines fall into the wrong hands, they could become a serious threat. Such machines could be used as weapons by small, politically motivated terrorist groups or by large military organizations. AGI could give such groups the ability to spy on a population, gather and synthesize information about it, and plan attacks against it. AGI developers will have little idea who will end up with their technology; they could unwittingly be building deadly weapons to be used against humanity. Because AGI can be used as a weapon, its development carries many of the same moral implications as the development of other weapons.
The Artilect War
Another danger to humanity is the possibility that a good machine, one designed to be benevolent, will go bad, as HAL 9000 did in 2001: A Space Odyssey. Evolutionary and learning algorithms may produce a system that is essentially a black box, with inner workings so complex that even experts cannot fully comprehend them. Such machines, like humans, may have extremely complex psychologies, so the potential for hostility is non-zero. Even if special constraints are imposed on such a system’s behavior, rules such as “do not kill” could be overwritten by subsequent updates initiated by the AGI system itself. The public may be terrified of such a scenario, leading to what researcher Hugo de Garis calls “The Artilect War.”
Seed AI
If AGI is developed, many benefits would likely follow. It could lead to exponential advances in every scientific field, and machines could take on much of the epistemic labor. AGI could also be applied to fields that require an extraordinary amount of training: doctors and other professionals could be replaced by efficient machines that don’t get tired, don’t require years of training, and make fewer mistakes.
Ray Kurzweil, one of the most optimistic futurists of our time, believes that exponential technological advancement will lead to a technological Singularity: an intelligence explosion brought about by the development of something called Seed AI. Seed AI refers to any intelligent system capable of very rapid, exponential gains in intelligence, achieved by modifying its own programming to create a smarter version of itself. That updated version would be even better at programming, allowing it to create smarter successors still, and so on, in a runaway intelligence feedback loop, sketched below.
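The dynamics of that feedback loop are easy to sketch. The toy simulation below uses a single “capability” number and a fixed conversion rate, both invented purely for illustration; it is not a prediction, just a demonstration of why compounding self-improvement can look unremarkable for many generations and then explode.

```python
capability = 1.0  # arbitrary units; 1.0 = the initial Seed AI
rate = 0.05       # hypothetical fraction of capability turned into improvement

generation = 0
while capability < 1e6 and generation < 100:
    generation += 1
    # The gain is proportional to the square of current capability:
    # a smarter system is assumed to be better at improving itself.
    capability += rate * capability ** 2
    print(f"generation {generation:2d}: capability = {capability:,.2f}")
```

Under these made-up numbers, capability grows slowly for the first twenty or so generations and then rockets past one million within a handful more, which is the qualitative behavior the Seed AI argument relies on.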
In his book “The Singularity Is Near,” Kurzweil predicts that the Singularity will arrive by 2045, as computers become smarter and more capable than humans. While he acknowledges that many details are unknown, he believes a Singularity is possible; furthermore, he claims that it will provide us with numerous benefits and will most likely be friendly.
Let’s sum up. AGI is perhaps not far from reality, although its exact future remains shrouded in mystery. Today it is neither mere science fiction nor the insane fantasy of mad scientists. While most AI research currently focuses on narrow AI, a small group of serious scientists is working on the harder problem of creating AGI. The consequences of that work could drastically affect the future of humanity.