OpenAI safety researcher announces their departure over safety concerns while Anthropic CEO claims AI will extend human life "to 150 years"

The logo of the ChatGPT application, developed by US artificial intelligence research organization OpenAI, displayed on a smartphone screen. (Image credit: Getty Images | Kirill Kudryavtsev)

"IMO, an AGI race is a very risky gamble, with huge downside," indicated OpenAI safety researcher Steven Adler as he announced his exit from the AI startup after four years of service. "No lab has a solution to AI alignment today. And the faster we race, the less likely that anyone finds one in time."

The safety concerns raised in Adler's departure message echo the claim by Roman Yampolskiy, AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, that there is a 99.999999% probability of AI ending humanity.

Honestly I'm pretty terrified by the pace of AI development these days. When I think about where I'll raise a future family, or how much to save for retirement, I can't help but wonder: Will humanity even make it to that point? Today, it seems like we're stuck in a really bad equilibrium. Even if a lab truly wants to develop AGI responsibly, others can still cut corners to catch up, maybe disastrously. And this pushes all to speed up. I hope labs can be candid about real safety regs needed to stop this.

Ex-OpenAI researcher Steven Adler

According to Yampolskiy's p(doom) estimate, the only way to avert near-certain doom is not to build AI in the first place. Perhaps that option is out the window now that OpenAI and SoftBank have placed a $500 billion bet on Stargate, a project to build data centers across the United States to power sophisticated AI advances.

Adler isn't the first employee to depart from the ChatGPT maker over safety concerns. Last year, Jan Leike, OpenAI's head of alignment and superalignment lead, announced his departure. The former alignment lead disclosed that he'd disagreed with OpenAI's leadership over core priorities, including next-gen AI models, security, and monitoring.

Perhaps more concerning, Leike indicated that safety processes had taken a back seat as shiny products like AGI took precedence. Additionally, OpenAI reportedly rushed GPT-4o's launch, giving its safety team less than a week to run safety tests. Sources close to the matter said OpenAI sent out invitations for the product's launch celebration party before the safety team had run its tests. While an OpenAI spokesperson admitted that GPT-4o's launch was rushed and stressful for the safety team, they claimed that the company didn't cut any safety corners to meet the tight deadline.

Anthropic CEO says AI could double the human lifespan within 5 to 10 years

Anthropic CEO Dario Amodei says AI could double lifespans in 10 years. (Image credit: Getty Images | Chesnot)

While speaking at the World Economic Forum in Davos, Switzerland, Anthropic CEO Dario Amodei claimed generative AI could double human lifespans within 5 to 10 years (via tsarnick on X):

"It is my guess that by 2026 or 2027, we will have AI systems that are broadly better than almost all humans at almost all things. I see a lot of positive potential."

The executive highlighted major improvements AI could bring across tech, the military, and health. He believes the technology could increase human lifespans despite warnings that it may pose an existential threat to humanity.

Amodei continued:

"If I had to guess, and you know, this is not a very exact science, my guess is that we can make 100 years of progress in areas like biology in five or ten years if we really get this AI stuff right. If you think about, you know, what we might expect humans to accomplish in an area like biology in 100 years, I think a doubling of the human lifespan is not at all crazy. And then if AI is able to accelerate that, we may be able to get that in five to ten years. So that’s kind of the grand vision. At Anthropic, we are thinking about, you know, what’s the first step towards that vision, right? If we’re two or three years away from the enabling technologies for that."

Amodei admits that "this is not a very exact science," meaning the progression of AI might take a different route, especially amid claims that scaling laws have begun to hit a wall due to a lack of high-quality content for training AI models. Interestingly, OpenAI CEO Sam Altman claims AI will be smart enough to solve the problems created by its rapid advances, including the potential destruction of humanity.

OpenAI is racing toward AGI, especially after Altman confirmed that his team knows how to build AGI and that it could be achieved sooner than anticipated with current hardware. He also hinted that the company could be shifting focus to superintelligence, which could accelerate scientific breakthroughs tenfold, making each year as revolutionary as a decade.

Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya, with extensive experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. You'll also catch him occasionally contributing at iMore about Apple and AI. When AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.