Former OpenAI Chief Scientist starts new AI firm with a keen focus on building safe superintelligence, in contrast to his former employer's prioritization of 'shiny products'
There's a new AI kid on the block that could give OpenAI a run for its money on safe superintelligence.
What you need to know
- Former OpenAI Chief Scientist Ilya Sutskever is starting a new company dubbed Safe Superintelligence Inc.
- The new company will focus on building safe superintelligence, whereas OpenAI's safety processes and culture have reportedly taken a backseat to shiny products.
- Privacy and security remain critical issues with the evolution of AI and the emergence of tools like Microsoft's controversial Windows Recall feature.
OpenAI and its CEO Sam Altman have been hitting headlines hard in the past few months for varied reasons, including the launch of its 'magical' flagship GPT-4o model and the disbandment of the company's superalignment team.
Several staffers departed OpenAI last month, including co-founder and Chief Scientist Ilya Sutskever. In announcing his departure from the AI startup, Sutskever indicated that after a decade at the company, he was leaving to focus on a project that was "personally meaningful."
Details about that personally meaningful project remained a mystery until now. Sutskever has disclosed that he is starting a new company dubbed Safe Superintelligence Inc. (SSI). The company will focus squarely on building safe superintelligence, which remains a critical issue in the new age of AI.
"I am starting a new company: https://t.co/BG3K3SI3A1" (June 19, 2024)
According to Sutskever:
"We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product. We will do it through revolutionary breakthroughs produced by a small cracked team.
We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.
Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."
Safe Superintelligence Inc. could be OpenAI's biggest nightmare
OpenAI has mostly been in the spotlight for the wrong reasons. It's no secret that OpenAI and Sam Altman are chasing superintelligence, but at what cost? Jan Leike, the company's former head of alignment, superalignment lead, and executive, also left OpenAI around the same time as Sutskever.
Leike indicated that he left after repeated disagreements with top executives over the company's core priorities, spanning next-gen models, security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and more.
Leike said he joined because he believed OpenAI was the best place in the world to do this research, yet the company seemingly let its safety processes and culture take a backseat to the development of shiny products. "Building smarter-than-human machines is an inherently dangerous endeavor," Leike added. "OpenAI is shouldering an enormous responsibility on behalf of all of humanity."
Some predictions indicate AI might be smarter than humans by the end of 2026, while other reports claim the technology will eventually take over most jobs and render work a hobby. What happens if we build technology that is smarter than we are? Could it spiral out of control and overrun humanity?
Whether AI is a fad remains debatable, but one thing is certain: it's rapidly growing and being widely adopted across the world. Even NVIDIA CEO Jensen Huang says we might be on the verge of the next phase of AI, with self-driving cars and humanoid robots at the forefront.
If Sutskever and Safe Superintelligence Inc. can deliver on safe superintelligence, the company could give OpenAI a run for its money. After all, privacy and security remain among the biggest concerns holding back broader adoption of AI.