13 former and current OpenAI employees with endorsements by 'The Godfathers of AI' outline 4 key measures to address AI risks
OpenAI employees are taking it upon themselves to regulate and address some of the risks arising from the rapid proliferation of AI.
What you need to know
- Current and former OpenAI employees have signed an open letter (endorsed by The Godfathers of AI) addressing and presenting ways to mitigate AI risks.
- Some of the measures include prohibiting AI firms from enforcing NDAs that restrict criticism of the company for risk-related concerns.
- This comes in the wake of OpenAI reportedly forcing departing employees to sign NDAs or risk losing their vested equity.
In May, a handful of OpenAI employees departed from the company, including superalignment lead Jan Leike, after it unveiled its 'magical' new flagship GPT-4o model at its Spring Update event.
Leike indicated his departure was fueled by constant disagreements with leadership over safety, monitoring, and the prioritization of shiny products. Consequently, this opened a can of worms for the hot startup, with former OpenAI board members reporting incidents of psychological abuse involving CEO Sam Altman.
There are major concerns around generative AI, including fears that progressive advancements in the landscape could spell the end of humanity, coupled with reports of AI taking over jobs and turning work into hobbies. Current and former employees at top AI companies, including OpenAI, Anthropic, and DeepMind, have penned a letter addressing some of the risks centered on the technology (via Business Insider).
The letter seeks protection for whistleblowers on issues that may pose imminent danger to humanity:
"We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity. We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction. AI companies themselves have acknowledged these risks, as have governments across the world, and other AI experts."
The letter has been signed by 13 employees from top AI firms and endorsed by Yoshua Bengio and Geoffrey Hinton, widely dubbed the "Godfathers of AI." Daniel Kokotajlo, a former OpenAI employee, indicated that he left the company because he'd lost hope in its values, specifically its sense of responsibility while making AI advances:
"They and others have bought into the 'move fast and break things' approach and that is the opposite of what is needed for technology this powerful and this poorly understood."
The signatories highlight four core demands that could potentially address some of the issues and risks riddling the technology, including:
- That the company will not enter into or enforce any agreement that prohibits "disparagement" or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit;
- That the company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company's board, to regulators, and to an appropriate independent organization with relevant expertise;
- That the company will support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies to the public, to the company's board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected;
- That the company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed. We accept that any effort to report risk-related concerns should avoid releasing confidential information unnecessarily. Therefore, once an adequate process for anonymously raising concerns to the company's board, to regulators, and to an appropriate independent organization with relevant expertise exists, we accept that concerns should be raised through such a process initially. However, as long as such a process does not exist, current and former employees should retain their freedom to report their concerns to the public.
This follows reports that OpenAI forced departing employees to sign NDAs barring them from criticizing the company, at the risk of losing their vested equity. OpenAI CEO Sam Altman admitted he was embarrassed about the situation but indicated the company never clawed back anyone's vested equity.
While speaking to Business Insider, an OpenAI spokesman indicated that the debate around the technology is important and raises crucial points. As such, OpenAI will work closely with relevant entities to ensure it continues "providing the most capable and safest A.I. systems" to bolster its scientific approach to addressing these risks.
Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. You'll also catch him occasionally contributing at iMore about Apple and AI. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.