OpenAI reportedly sent out invitations for GPT-4o's launch party before safety testing even began, pressuring the safety team to speed through the process in under one week

ChatGPT on Android (Image credit: Shutterstock)

What you need to know

  • OpenAI's GPT-4o launch was seemingly rushed, leaving the safety team with little time to test the model.
  • Sources disclosed that OpenAI had sent invitations for the product's launch celebration party before the safety team ran tests.
  • An OpenAI spokesperson admits the launch was stressful for its safety team, but insists the firm didn't cut corners when shipping the product.

OpenAI has been under fire for the past few months, with several former employees claiming it prioritizes shiny products over safety processes. As it happens, the ChatGPT maker isn't likely to shake off these allegations anytime soon.

Several employees recently disclosed that OpenAI prioritized speed over thoroughness (via The Washington Post). According to sources, OpenAI's safety team was seemingly pressured to rush through the new testing protocol for GPT-4o. 

For context, it's critical for sophisticated AI tools to go through thorough testing to identify loopholes that bad actors might exploit to cause harm. OpenAI's leadership reportedly pressured the safety team to rush through the new testing protocol to meet a rigid May launch date for GPT-4o (GPT-4 Omni).

According to one source, OpenAI sent out invitations for the product's launch celebration party before the safety team had even run its tests. "They planned the launch after-party before knowing if it was safe to launch," the source added. "We basically failed at the process."

OpenAI spokesperson Lindsey Held admits that the launch was stressful for the safety team, but says the company "didn’t cut corners on our safety process." Held insists the company conducted thorough testing before shipping GPT-4o to broad availability.

An OpenAI preparedness team representative corroborates Held's statement and says the company met its regulatory obligations. However, the representative admits the testing protocol was under a tight schedule. "OpenAI is now rethinking our whole way of doing it and the Omni approach was just not the best way to do it," the representative added.

Safety seems like an afterthought for OpenAI

OpenAI logo (Image credit: OpenAI)

In a separate report, a former OpenAI staffer, William Saunders, claimed that he left the company because he didn't want to end up working for the Titanic of AI. "They're on this trajectory to change the world, and yet when they release things, their priorities are more like a product company. And I think that is what is most unsettling," Saunders added. 

Saunders' sentiments were previously echoed by OpenAI's former alignment lead, who admitted that he'd disagreed with the firm's top management over its decision-making process and core priorities on next-gen models, security, monitoring, preparedness, safety, adversarial robustness, and more. These decisions, and the tendency to prioritize shiny products over safety, reportedly prompted a wave of departures from the company's safety team.

OpenAI's long-term plan is to achieve AGI, but as things stand, that won't be easy. On one side, AI's enormous resource demands weigh heavily on the company: OpenAI reportedly spends up to $700,000 per day to keep ChatGPT running, on top of the technology's steep water and electricity consumption.

On the other hand, there's concern about AI becoming smarter than humans, taking over jobs, and eventually ending humanity. According to the latest p(doom) estimates from an OpenAI insider and an AI researcher, there's a 99.9% chance that AI will end humanity.

Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. You'll also catch him occasionally contributing at iMore about Apple and AI. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.