Edward Snowden labels OpenAI's new board appointment a "willful, calculated betrayal of the rights of every person on Earth"
OpenAI is slowly building its safety team again, but its latest board appointment has stirred a bit of controversy.
What you need to know
- OpenAI recently appointed Paul Nakasone, a retired US Army General and former director of the National Security Agency (NSA), to its board.
- The firm says Nakasone's experience in cybersecurity will help it realize AGI safely and ensure that it benefits all humanity.
- However, a former NSA employee says Nakasone's appointment is a betrayal of everyone's rights and that we should not trust OpenAI's products like ChatGPT.
OpenAI's superalignment team took a major hit last month after several top executives left the company, citing reasons including its prioritization of 'shiny products' over safety processes. The ChatGPT maker has since formed a new safety team, led by Sam Altman, tasked with ensuring its technological advances meet critical safety and security standards.
Recently, the company appointed Paul Nakasone, a retired US Army General and former director of the National Security Agency (NSA), to its board. For context, Nakasone previously led US Cyber Command, the military's cybersecurity-focused unit. His vast experience and background likely contributed heavily to his landing a seat on OpenAI's board (via Futurism).
According to OpenAI board chair Bret Taylor:
"General Nakasone's unparalleled experience in areas like cybersecurity will help guide OpenAI in achieving its mission of ensuring artificial general intelligence benefits all of humanity."
As it happens, Nakasone's appointment to OpenAI's Safety and Security Committee doesn't sit well with everyone. Former NSA employee turned whistleblower Edward Snowden has bluntly expressed his concerns about the appointment:
"They've gone full mask-off: 𝐝𝐨 𝐧𝐨𝐭 𝐞𝐯𝐞𝐫 trust OpenAI or its products (ChatGPT etc). There is only one reason for appointing an NSAGov Director to your board. This is a willful, calculated betrayal of the rights of every person on Earth. You have been warned."
— Edward Snowden (@Snowden) on X, June 14, 2024
Snowden's sentiments center on the NSA's history of warrantless surveillance of US citizens, and they land amid broader privacy and security concerns surrounding the advancement of AI. For instance, you might recall that Microsoft pulled its controversial Windows Recall feature before it shipped due to backlash from concerned users. On paper, Redmond had promised a privacy-focused feature that runs entirely on-device.
However, tech-savvy users identified loopholes that let them bypass the feature's security measures and access the locally stored snapshots of everything a user sees on their screen, prompting the UK's data watchdog to scrutinize the feature's safety.
Privacy and security concerns continue to dog OpenAI's technology advances
Contrary to Snowden's sentiments, Nakasone says OpenAI's mission closely aligns with his personal values and his experience in public service. He plans to use that experience to help make artificial general intelligence (AGI) safe and beneficial for everyone. Separately, rumors swirling around the internet suggest OpenAI could restructure into a conventional for-profit company to attract more capital.
Matthew Green, a cryptography professor at Johns Hopkins University, shares sentiments similar to Snowden's:
"I do think that the biggest application of AI is going to be mass population surveillance so bringing the former head of the NSA into OpenAI has some solid logic behind it."
Admittedly, privacy and security are among the major concerns holding back broader AI adoption. If Snowden's warnings prove accurate, they could badly damage public trust: it's clear that people don't want their personal data accessed or used to train AI models. That said, it'll be interesting to see Nakasone's contributions to the board and how he plans to use his experience to promote safe AI development.
Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. You'll also catch him occasionally contributing at iMore about Apple and AI. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.