From electric cars to Mars colonies, Tesla and SpaceX boss Elon Musk has made his name by insisting that the future can get here faster. But when it comes to artificial intelligence (AI), he sounds very different: he has repeatedly spoken out against it, declaring it the most serious threat to the survival of the human race.
Based on his knowledge of machine intelligence and its development, Musk believes there is reason to worry. The billionaire tech entrepreneur has called AI more dangerous than nuclear warheads and said there needs to be a regulatory body overseeing the development of superintelligence. He has called AI humanity’s “biggest existential threat” and compared it to “summoning the demon.”
In this post, we look at some of Elon Musk’s best-known quotes and video remarks on the dangers of artificial intelligence.
“With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like… (wink) yeah he’s sure he can control the demon… doesn’t work out.”
“I don’t know… but there are some scary outcomes… and we should try to make sure that the outcomes are good and not bad.”
“…the pace of (AI) progress is faster than people realize. It would be fairly obvious if you saw a robot walking around talking and behaving like a person, you’d be like ‘Whoa… that’s like… what’s that?’ That would be really obvious. What’s not obvious is a huge server bank in a dark vault somewhere with an intelligence that’s potentially vastly greater than what a human mind can do. Its eyes and ears would be everywhere: every camera, every microphone, and every device that’s network accessible.”
“Artificial intelligence is just digital intelligence. And as the algorithms and the hardware improve, that digital intelligence will exceed biological intelligence by a substantial margin. It’s obvious. Ensuring that the advent of AI is good, or at least we try to make it good, seems like a smart move. We’re not paying attention. We worry more about what name somebody called someone else, than whether AI will destroy humanity. That’s insane. We’re like children in a playground. … The way in which a regulation is put in place is slow and linear. If you have a linear response to an exponential threat, it’s quite likely the exponential threat will win. That, in a nutshell, is the issue.”
“As AI gets probably much smarter than humans, the relative intelligence ratio is probably similar to that between a person and a cat, maybe bigger. I do think we need to be very careful about the advancement of AI.”
“Most people don’t understand just how quickly machine intelligence is advancing, it’s much faster than almost anyone realized, even within Silicon Valley… (When asked “Why is that dangerous?”): If there is a superintelligence… particularly if it is engaged in recursive self-improvement…”
“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”
“AI doesn’t have to be evil to destroy humanity – if AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings.”
“If there is a superintelligence whose utility function is something that’s detrimental to humanity, then it will have a very bad effect… it could be something like getting rid of spam email… well the best way to get rid of spam is to get rid of… humans.”
“I’m not against the advancement of AI – I want to be really clear about this. But I do think that we should be extremely careful.”
“Pay close attention to the development of AI, we need to be very careful in how we adopt AI and make sure that researchers don’t get carried away.”
“AI is much more advanced than people realize. … Humanity’s position on this planet depends on its intelligence so if our intelligence is exceeded, it’s unlikely that we will remain in charge of the planet.”