Sam Altman re-prioritizes safety processes at OpenAI after safety seemingly took a backseat to 'shiny products'

Image of OpenAI CEO Sam Altman (Image credit: Bloomberg via Getty Images)

What you need to know

  • OpenAI CEO Sam Altman recently highlighted new safety updates for the company.
  • The ChatGPT maker will allocate up to 20% of its computing resources to safety processes.
  • The company will give the US AI Safety Institute early access to its next-gen model "to push forward the science of AI evaluations."

OpenAI CEO Sam Altman highlighted new updates for the company's safety policies. The top executive indicated that the ChatGPT maker is living up to its promises and will allocate up to 20% of its computing resources to safety processes across its tech stack.

Additionally, Altman disclosed that OpenAI has been working closely with the US AI Safety Institute and has agreed to grant the institute early access to its next-gen model "to push forward the science of AI evaluations."

Finally, the top executive asked all OpenAI employees, current and former, to openly raise concerns about the company's trajectory and product development.


OpenAI may be 'safer,' but is it enough?

Hands grasping the planet Earth in a pixel art style with OpenAI logo (Image credit: Microsoft Designer)

Will generative AI lead to the end of humanity? Is AI safe and private? These are some of the questions lingering in concerned users' minds as the technology becomes more prevalent and advanced, with companies like OpenAI, Microsoft, and Google at the forefront. 

AI has been under fire for several reasons, including copyright infringement, high water and power consumption, and more. 

Days after launching its multimodal GPT-4o model, OpenAI lost several members of its safety and superalignment teams. A former staffer disclosed that he left the ChatGPT maker after repeatedly disagreeing with top management over core priorities for next-gen models, including safety, preparedness, and monitoring.

The staffer flagged a critical issue with OpenAI's safety priorities, stating that the company prioritizes shiny products while safety processes take a backseat. Around the same time, more former OpenAI employees began coming forward with intricate details about the company's operations.

However, the revelations were short-lived. A report disclosed that OpenAI employees are subject to nondisclosure and non-disparagement agreements that prevent them from criticizing the company or its operations even after leaving. Even admitting that they signed the agreements is considered a violation of the NDA.

This seemingly caused employees to remain tight-lipped about the company's operations or risk losing their vested equity, with one former employee indicating that working for OpenAI felt like "the Titanic of AI."

Sam Altman admitted the clause was part of OpenAI's non-disparagement terms but said it has since been voided. He is calling on current and former employees to raise concerns about the company's trajectory "and feel comfortable doing so," as their vested equity will remain untouched.

Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. You'll also catch him occasionally contributing at iMore about Apple and AI. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.