Sam Altman claims superintelligence might only be "a few thousand days" away from OpenAI's doorstep, but there are a lot of details to figure out
OpenAI CEO says superintelligent AI might be "a few thousand days" from the AI firm's grasp.
What you need to know
- OpenAI CEO Sam Altman says superintelligence might only be "a few thousand days" away.
- Altman claims that AI will get better with scale and will drive meaningful improvements to the lives of people around the world, including fixing the climate, though a lot of details still need to be worked out.
- A former OpenAI researcher warned that the firm wouldn't know how to handle AGI and all that it entails.
With the rapid spread of AI and the broad adoption of its tools, there has been growing concern about job security, the possible extinction of humanity, and other critical issues. Close watchers of the technology like Elon Musk claim the world is on the verge of its biggest technological breakthrough with AI; however, there might not be enough electricity to power its advances.
We recently learned that AI tools like Copilot and ChatGPT consume large amounts of water for cooling, reportedly up to three bottles of water to generate a mere 100 words. Despite these issues, OpenAI CEO Sam Altman has expressed his aspirations for achieving artificial general intelligence (AGI). However, a former researcher for the ChatGPT maker warned that the AI firm wouldn't know how to handle all that it entails.
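For context, here is a quick back-of-the-envelope conversion of that reported figure, assuming a standard 500 ml bottle (the bottle size is our assumption, not part of the original claim):

```python
# Illustrative arithmetic only: converts the reported "3 bottles per 100 words"
# figure into water per generated word, assuming a 500 ml bottle.
BOTTLE_ML = 500            # assumed bottle size in milliliters
WORDS = 100                # words generated in the reported claim
BOTTLES_PER_100_WORDS = 3  # figure as reported

ml_per_word = BOTTLES_PER_100_WORDS * BOTTLE_ML / WORDS
print(f"~{ml_per_word:.0f} ml of cooling water per generated word")  # ~15 ml
```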
In a new blog post titled The Intelligence Age, Sam Altman writes:
"It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there."
The CEO pointed to the rapid advancement of deep learning as the basis for his claims. "That’s really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying “rules” that produce any distribution of data)," added Altman. "To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems."
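The "more compute, better results" claim echoes the power-law scaling behavior reported in the deep learning literature. Below is a minimal, purely illustrative sketch of that idea; the constants are placeholders loosely in the spirit of published scaling-law papers, not OpenAI's actual figures:

```python
# Illustrative sketch of a power-law scaling curve: as training compute grows,
# test loss falls smoothly. Constants are hypothetical placeholders.

def loss(compute_pf_days: float, c_c: float = 3.1e8, alpha: float = 0.05) -> float:
    """Hypothetical scaling law: loss = (c_c / compute) ** alpha."""
    return (c_c / compute_pf_days) ** alpha

for c in (1e3, 1e5, 1e7, 1e9):
    print(f"compute = {c:.0e} PF-days -> loss ~ {loss(c):.3f}")
```

The point of the sketch is simply that the curve keeps improving as compute increases, which is the trend Altman's argument leans on.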
In the past few months, top officials at the firm have made strong claims about the progression of their AI models and products. Speaking at the 27th annual Milken Institute Global Conference, OpenAI COO Brad Lightcap said:
"In the next couple of 12 months, I think the systems that we use today will be laughably bad. We think we're going to move toward a world where they're much more capable."
On another occasion, Sam Altman promised, with a high degree of scientific certainty, that OpenAI's GPT-5 model would be smarter than GPT-4, which he admitted "kind of sucks."
Superintelligence is no walk in the park
While OpenAI CEO Sam Altman is focused on achieving superintelligence, he admits it's going to be an uphill task:
"There are a lot of details we still have to figure out, but it’s a mistake to get distracted by any particular challenge. Deep learning works, and we will solve the remaining problems."
Despite the challenges, Altman maintains that AI will get better with scale and will consequently "lead to meaningful improvements to the lives of people around the world," including fixing the climate, discovering all of physics, and more.
This news comes after several members of OpenAI's superalignment team recently departed from the AI firm, including co-founder and Chief Scientist Ilya Sutskever, who announced he was leaving to focus on a project that was "personally meaningful" to him: Safe Superintelligence Inc.
OpenAI has been accused of prioritizing shiny products over safety processes, with former employees even referring to it as the "Titanic of AI." Moreover, the ChatGPT maker reportedly sent invitations to GPT-4o's launch before testing took place, pressuring the safety team to rush through the entire process in under one week to meet the assigned deadline.
Elsewhere, OpenAI was recently dragged back to court by Elon Musk over an alleged betrayal of its founding mission and alleged racketeering activities. It remains to be seen how OpenAI plans to handle this wide array of issues as it edges closer to superintelligence.
Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. You'll also catch him occasionally contributing at iMore about Apple and AI. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.
Ron-F: Sounds delusional. Anyway, what happens to a company whose CEO promises the most revolutionary product ever and doesn't deliver? This might put OpenAI in a bad place in five years.
Skettalee: Y'all are stupid for the types of things you report on. You're just reporting on the trolling remarks this business made, and they don't even know what they're talking about. Plus, the amount of time you're talking about is probably closer to 10 years away, and you try to cause panic in people by saying this stuff is just "a few" thousand days away. I know you don't really feel good about the types of topics you cover, especially saying things in your posts like "WHAT YOU NEED TO KNOW," as if your company has any idea what people really "NEED TO KNOW." I would be so ashamed of myself if I were putting the kind of content across the internet that you are.
FraJa: Microsoft Research has a department called "AI Frontiers" (Microsoft Research: AI Frontiers). MS Research is the largest private scientific institution in the field of computing (in a very broad sense) and many other domains...
They have a very interesting podcast; they are scientists, and they see things differently than "the news" or a commercial spokesperson does, not to mention Reddit or "in the hills".
One of those scientists said the following on the podcast "AI Frontiers: the physics of AI":
Sébastien Bubeck: Why is it not AGI? Because it’s still lacking some of the fundamental aspects, two of them, which are really, really important. One is memory. So, every new session with GPT-4 is a completely fresh tabula rasa session. It’s not remembering what you did yesterday with it. And it’s something which is emotionally hard to take because you kind of develop a relationship with the system.
As crazy as it sounds, that’s really what happens. And so you’re kind of disappointed that it doesn’t remember all the good times that you guys had together. So this is one aspect. The other one is the learning. Right now, you cannot teach it new concepts very easily. You can turn the big crank of retraining the model.
(...)
Absolutely. Maybe one other point that I want to bring up about AGI, which I think is confusing a lot of people. Somehow when people hear general intelligence, they want something which is truly general that could grapple with any kind of environment. And not only that, but maybe that grapples with any kind of environment and does so in a sort of optimal way.
This universality and optimality, I think, are completely irrelevant to intelligence. Intelligence has nothing to do with universality or optimality. We as human beings are notoriously not universal. I mean, you change a little bit the condition of your environment, and you’re going to be very confused for a week. It’s going to take you months to adapt.
So, we are very, very far from universal and I think I don’t need to tell anybody that we’re very far from being optimal. The number of crazy decisions that we make every second is astounding. So, we’re not optimal in any way. So, I think it is not realistic to try to have an AGI that would be universal and optimal. And it’s not even desirable in any way, in my opinion. So that’s maybe not achievable and not even realistic, in my opinion.
So, AGI in this definition is the ability to remember and learn, not to take over the world and "Kill all humans" 😉
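Bubeck's "tabula rasa" point above is easy to see in practice: chat models hold no state between sessions, so any "memory" has to live on the client, which resends the conversation history on every turn. Here is a minimal sketch of that pattern; `send_to_model` is a placeholder, not a real API:

```python
# Minimal sketch of client-side "memory" for a stateless chat model.
# send_to_model() is a hypothetical stand-in for a chat-completion call.

def send_to_model(messages: list[dict]) -> str:
    """Placeholder for a call to a chat-completion endpoint (not a real API)."""
    return f"(model reply, given {len(messages)} messages of context)"

history: list[dict] = []  # the only "memory" lives here, on the client side

def chat_turn(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = send_to_model(history)  # the full history is resent every turn
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat_turn("My name is Ada."))
print(chat_turn("What is my name?"))  # answerable only because we resent history
# A brand-new session starts with an empty history: the model recalls nothing.
```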