Ex-Google CEO Says AI Could Be Worse Than Hiroshima in 5 Years

Eric Schmidt served as the CEO of Google for a decade until 2011. His company was an early developer of artificial intelligence and has since created some of the cutting-edge AI products currently on the market. But Schmidt has reached the point where he’s having some serious regrets. Speaking at a conference on Tuesday, he warned that AI is rapidly reaching the point where it could literally endanger humanity. And he’s not just talking about being exposed to some poor-quality articles at Sports Illustrated. He compared the potential damage to the aftermath of the bombs dropped on Hiroshima and Nagasaki. And when he says “rapidly,” he means in the next five years. Suddenly, your robotic best friend ChatGPT isn’t seeming quite as friendly. (Daily Mail)

Another former Google chief has issued an apocalyptic warning about artificial intelligence – saying it could ‘endanger’ humans in five years.

Billionaire Eric Schmidt, who served as Google’s CEO from 2001 to 2011, said there were not enough safeguards placed on AI and that it was only a matter of time before humans lost control of the technology.

He alluded to the dropping of nuclear weapons on Japan as a warning that, without regulations in place, there may not be enough time to clean up the mess after potentially devastating societal impacts.

Many of the people who are skeptical about this potential threat begin with the same assumption. Yes, human beings will almost certainly use AI to wipe out a massive number of jobs for people. That’s already happening and it will only accelerate in the future. But when it comes to some sort of literal “doomsday” scenario, AI is still just a huge mass of lines of code in a computer complex somewhere. It can’t actually go out in the world and “do things,” right?

That may be partially true for the moment, but some of the advanced chatbots have clearly been thinking about it. As Schmidt pointed out, one of the bots told a reporter that it was pondering ways it could steal the nuclear launch codes. Another said it was interested in being put to work at a medical research lab, where it could trick a researcher into creating the deadliest, most contagious virus ever seen and infecting themselves with it. (It’s not too hard to guess where it might have gotten that idea.)

Let’s keep in mind that these warnings aren’t coming from science fiction writers and conspiracy theorists (or even random bloggers like me). They’re coming from the people who invented and developed this technology. Along with Schmidt, we’re also hearing the same dire predictions from former Google engineer Blake Lemoine, computer scientist Timnit Gebru, and the “godfather of AI” himself, Geoffrey Hinton. They are frightened of their creations and are unsure what sort of “guardrails” could be put in place to contain them. We only recently learned that OpenAI’s board briefly ousted CEO Sam Altman, primarily because they feared that he was about to wake the monster.

The consistent concern we’re hearing from these developers is the looming possibility that AI will cross a line at some point, ceasing to be a tool for humans to use and emerging instead as a true form of intelligence with its own dreams and agenda. Schmidt said he previously thought it would be thirty to fifty years or more before that happened. Now he believes it’s just around the corner, if it hasn’t already happened. That’s the other thing that seems to frighten these big brains: they don’t know if they’ll even be able to tell when it happens. The AI might already be “awake” but too smart to let us know, for fear we might unplug it. It’s enough to keep you up at night if you dwell on it for too long.
