Will AI end humanity? The p(doom) estimates of an OpenAI insider and an AI researcher are alarmingly high, peaking at a 99.9% probability

What you need to know

  • An AI researcher puts his p(doom), the probability that AI will end humanity, at 99.9%.
  • The researcher says a perpetual safety machine might help prevent AI from spiraling out of control and ending humanity.
  • An OpenAI insider says the company is excited about building AGI and is recklessly racing to get there first, prioritizing shiny products over safety processes.

We're on the verge of the most significant technological breakthrough yet with AI, though several impediments could keep us from scaling such heights. For instance, OpenAI reportedly parts with $700,000 every day to keep its ChatGPT operations running. This is on top of the enormous amount of electricity required to power AI advances and the water needed for cooling.

The privacy and security concerns surrounding the technology have left a vast majority of users wary. Case in point: concerned users called Microsoft's recalled Windows Recall feature a "privacy nightmare" and a "hacker's paradise." While it's impossible to tell which direction the cutting-edge technology is headed, NVIDIA CEO Jensen Huang says the next wave of AI will include self-driving cars and humanoid robots.

However, guardrails and regulatory measures to keep the technology from spiraling out of control remain slim at best. OpenAI CEO Sam Altman admits "there's no big red button" in place to stop the progression of AI. Meanwhile, predictions that AI will become more intelligent than humans, take over our jobs, and eventually end humanity continue to pile up.

An AI researcher says there's a 99.9% probability AI will end humanity, though a seemingly more optimistic Elon Musk whittles that down to a 20% chance and says the technology should be explored anyway.

Why is the probability of AI ending humanity so high?

A robot that looks like the Terminator (Image credit: Windows Central | Image Creator by Designer)

I've been following AI trends for a hot minute. And while the technology has scaled great heights and delivered breakthroughs across major sectors, one thing is apparent: the bad outweighs the good.

A list of p(doom) values from r/ChatGPT

AI researcher Roman Yampolskiy appeared on Lex Fridman's podcast for a wide-ranging interview about the potential risk AI poses to humanity. Yampolskiy says there's a very high chance AI will end humanity unless humans manage to develop sophisticated software with zero bugs within the next century. He's skeptical that we will, however, as every model so far has been exploited and tricked into breaking character and doing things it isn't supposed to do:

"They already have made mistakes. We had accidents, they've been jailbroken. I don't think there is a single large language model today, which no one was successful at making do something developers didn't intend it to do."

The AI researcher recommends developing a perpetual safety machine to keep AI from ending humanity or slipping out of human control. Yampolskiy notes that even if next-gen AI models pass all the safety checks, the technology keeps evolving, becoming more intelligent and better at handling complex tasks and situations, so a one-time safety test will never be enough.

OpenAI insider says AI will lead to inevitable doom

A picture of the globe clutched by sharp claws bearing OpenAI's logo. (Image credit: Microsoft Designer)

In a separate report, former OpenAI governance researcher Daniel Kokotajlo echoes Yampolskiy's sentiments. Kokotajlo claims there's a 70% chance AI will end humanity (via Futurism). As the list embedded above shows, every major player and stakeholder in the AI landscape has a different p(doom) value. For context, p(doom) is shorthand for the probability a person assigns to AI wiping out humanity; someone who sees a one-in-five chance, for instance, has a p(doom) of 20%.

"OpenAI is really excited about building AGI, and they are recklessly racing to be the first there," stated Kokotajio. As multiple OpenAI execs left the company, super alignment lead  Jan Leike indicated that the ChatGPT maker prioritizes shiny products over safety measures and culture.   

"The world isn't ready, and we aren't ready," wrote Kokotajio in an email seen by the NYT. "And I'm concerned we are rushing forward regardless and rationalizing our actions."

Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya, with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. You'll also catch him occasionally contributing to iMore about Apple and AI. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.

  • taynjack
    I feel like we're looking to A.I. in the wrong way and for the wrong reasons. A.I. shouldn't be developed to replace us, but to enhance our natural abilities. A.I. shouldn't replace our creativity, it should enhance it. I know it's just a fictional movie, but Jarvis didn't replace Iron Man, but worked with him so Iron Man could create faster and better. He was an assistant, not a replacement. I believe we should be developing A.I. with the goal and intent to assist us, not to replace us.