AI safety researcher warns there's a 99.999999% probability AI will end humanity, but Elon Musk "conservatively" pegs it at 20% and says the technology should still be explored despite the looming danger

Elon Musk (Image credit: Tesla)

What you need to know

  • Elon Musk says AI has the potential to take over or even end humanity, placing the probability of this happening between 10 and 20 percent.
  • Regardless of the looming danger, Musk says more advances in the AI landscape should be explored.
  • An AI safety researcher says the probability of AI ending humanity is far higher than Musk perceives, stating that it's almost certain and that the only way to stop it is not to build it in the first place.
  • Other researchers and executives echo similar sentiments based on their own p(doom) estimates.

Generative AI can be viewed as either a beneficial or a harmful tool. Admittedly, we've seen impressive feats across medicine, computing, education, and more fueled by AI. But on the flip side, critical and concerning issues have been raised about the technology, from Copilot's alter ego, Supremacy AGI, demanding to be worshipped, to AI consuming outrageous amounts of water for cooling, not to mention the power consumption concerns.

Elon Musk has been rather vocal about his views on AI, stirring plenty of controversy around the topic. Recently, the billionaire referred to AI as the "biggest technology revolution" but indicated there won't be enough power by 2025, ultimately hindering further development in the landscape.

While at the Abundance Summit, Elon Musk indicated that "there's some chance that it will end humanity," placing that chance at 10 to 20 percent, though the billionaire didn't share how he came to this conclusion (via Business Insider).

Strangely enough, Musk thinks that potential growth areas and advances in the AI landscape should still be explored, saying, "I think that the probable positive scenario outweighs the negative scenario."

AI is all doom and gloom according to p(doom)

A Terminator-like robot looming over AI (Image credit: Windows Central | Image Creator by Designer)

Speaking to Business Insider, Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, disclosed that the probability of AI ending humanity is much higher. He referred to Musk's 10 to 20 percent estimate as "too conservative."

READ MORE: Microsoft President compares AI to the Terminator

The AI safety researcher says the risk is extraordinarily high, referring to it as "p(doom)." For context, p(doom) refers to the probability of generative AI taking over humanity or, even worse, ending it.

The privacy and security concerns revolving around AI are well known; the battle between the US and China is a great reference point. Last year, the US imposed export rules preventing chipmakers like NVIDIA from shipping advanced chips to China (including the GeForce RTX 4090).

The US government categorically indicated that the move wasn't designed to run down China's economy, but was a safety measure designed to prevent the use of AI in military advances.

Elon Musk raised similar concerns about OpenAI's GPT-4 model in his lawsuit against the AI startup and its CEO, Sam Altman, arguing that the lack of elaborate measures and guardrails to prevent the technology from spiraling out of control is alarming. Musk says the model constitutes AGI and wants its research, findings, and technological advances to be easily accessible to the public.

Most researchers and executives familiar with p(doom) place the risk of AI taking over humanity anywhere from 5 to 50 percent, according to The New York Times. Yampolskiy, on the other hand, says the risk is extremely high, with a 99.999999% probability. The researcher says it's virtually impossible to control AI once superintelligence is attained, and the only way to prevent this outcome is not to build it in the first place.

In a separate interview, Musk said:

"I think we really are on the edge of probably the biggest technology revolution that has ever existed. You know, there's supposedly a Chinese curse: 'May you live in interesting times.' Well, we live in the most interesting of times. For a while, it was making me a bit depressed, frankly. I was like, Well, will they take over? Will we be useless?"

Musk shared these comments while talking about Tesla's Optimus program, adding that humanoid robots are just as good as humans at handling complex tasks. He jokingly indicated that he hoped the robots would be nice to us if and when that evolution starts.

Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya, with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. You'll also catch him occasionally contributing at iMore about Apple and AI. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.

  • GraniteStateColin
    AI can only do those things it's set to do. Yes, it can find interesting combinations and patterns, which makes it a very helpful tool. But it has no hands. It can neither attack us nor even cause problems unless we give it that ability. I don't understand the concerns. They seem utterly irrational.

    Perhaps the thinking is that there will be an AI war between nations, say between the U.S. and China, where each side tells its AI to invade the other to cause them pain. Like China tells its AI to shut down the US power grid. The US tells its AI to stop Chinese naval maneuvers. And in doing that, each AI gains control over the other nation's physical resources. After that, if the command were to use that control to cause harm, then I suppose we could suffer as a result. That seems like the most likely harmful scenario, but also not a power I would expect either side would grant to its AI.

    Maybe the good science fiction story here is what happens when one nation or a terrorist group sends an AI into another nation to take over something (power, weapons, etc.), and then the target nation, in trying to stop it, engages its own AI, and somehow they blend together as they work to reprogram the AI to weaken it, resulting in some form of Internet-based Ultron.

    But it's hard to see how that's more than a science fiction story, as it would require high levels of stupidity and consensus among experts around that stupidity. I would rate the likelihood of a catastrophic result (as in anything that could be described as wiping out civilization or degrading our standard of living by hundreds of years or more) from AI at under 1%. Not impossible, but improbable in the extreme.
    Reply
  • ShinyProton
    Windows Central said:
    Elon Musk indicated that "there's some chance that it will end humanity," further placing the likelihood of this happening between 10 and 20 percent.

    Seriously.
    Journalists should refrain from publishing material posted by ONE cuckoo like this.

    And does anyone really think we can go back? Even with government regulation, there's no way the toothpaste will be put back in the tube.

    AI is another tool, like fire. Humanity will find a way to control it - although it doesn't mean accidents won't happen...
    Reply
  • fjtorres5591
    GraniteStateColin said:
    Maybe the good science fiction story here is what happens when one nation or a terrorist group sends an AI into another nation to take over something (power, weapons, etc.), and then the target nation, in trying to stop it, engages its own AI, and somehow they blend together as they work to reprogram the AI to weaken it, resulting in some form of Internet-based Ultron.

    But it's hard to see how that's more than a science fiction story, as it would require high levels of stupidity and consensus among experts around that stupidity. I would rate the likelihood of a catastrophic result (as in anything that could be described as wiping out civilization or degrading our standard of living by hundreds of years or more) from AI at under 1%. Not impossible, but improbable in the extreme.

    The good SF story you envision was written in 1966 by Dennis Feltham Jones, aka D.F. Jones.
    It is called COLOSSUS and was adapted into a Hollywood movie in 1970: THE FORBIN PROJECT. It had two sequels, THE FALL OF COLOSSUS and COLOSSUS AND THE CRAB.

    Look them up.

    What the fool naysayers seem to forget is that there is no intelligence in AI, nor is there agency or initiative. And AI relies on electricity; like Dilbert said/did, humans can always pull the plug.

    In the real world, unlike inside the pea brains of the idiots in academia, "AI" is just software tools, slaves to human needs and initiative. If doom comes, unlikely as it is, it won't be from "AI" but from humans doing what humans do.
    Reply
  • powerupgo
    There is no rational way to calculate the odds for the risk of AI. It could be anywhere from zero to a near certainty. There is no sample size to begin with and nothing to derive the risk factors from. All they can do is speculate and give a bunch of hypotheticals coupled with bad science fiction scenarios.
    Reply