Vitalik Buterin proposes a "global soft pause button" that would cut worldwide AI compute by 90-99% for 1-2 years, buying humanity time to prepare for potentially catastrophic risks
The Ethereum co-founder recommends a soft pause as a way to rein in the rapid advancement of AI and avert potentially catastrophic harm.
Despite reports claiming that major AI labs, including OpenAI, Anthropic, and Google, are struggling to develop more advanced AI systems as scaling laws hit diminishing returns, partly due to a shortage of high-quality training data, generative AI continues to advance. OpenAI CEO Sam Altman recently indicated that AGI (artificial general intelligence) might be achieved sooner than anticipated, and that superintelligence is only "a few thousand days away."
Beyond privacy and security concerns around AI, many have expressed a deeper reservation: that the technology could lead to existential catastrophe. According to Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, there's a 99.999999% probability that AI will end humanity. The researcher claims the only way to avoid that outcome is not to build AI in the first place.
Amid growing calls for guardrails and regulations to prevent AI from veering off the rails and spiraling out of control, Ethereum co-founder Vitalik Buterin proposes a "global soft pause button" on industrial-scale AI hardware to prevent the technology from overwhelming humanity.
According to Buterin:
"The goal would be to have the capability to reduce worldwide available compute by ~90-99% for 1-2 years at a critical period, to buy more time for humanity to prepare. The value of 1-2 years should not be overstated: a year of "wartime mode" can easily be worth a hundred years of work under conditions of complacency. Ways to implement a "pause" have been explored, including concrete proposals like requiring registration and verifying location of hardware."
The Canadian computer programmer says clever cryptographic trickery could offer a more advanced approach to addressing AI risks. He proposes fitting industrial-scale AI hardware with a trusted chip that would only keep running if it receives three signatures a week from major international bodies, including at least one non-military party.
"The signatures would be device-independent (if desired, we could even require a zero-knowledge proof that they were published on a blockchain), so it would be all-or-nothing: there would be no practical way to authorize one device to keep running without authorizing all other devices," Buterin added. He indicated that the constant need to get online every week for a signature would help discourage extending the scheme to consumer hardware.
OpenAI CEO Sam Altman has claimed AI will be smart enough to solve the problems created by its own rapid advancement, including threats to humanity. Interestingly, he has also claimed the safety concerns won't manifest at the coveted AGI moment, as it will whoosh by with "surprisingly little" societal impact. Even so, the executive says AI should be regulated like airplanes, with an international agency overseeing safety testing of these advances.
Buterin touts the approach for several reasons: it could slow a dangerous transition at the first signs of catastrophic damage, while imposing a negligible burden on developers in the meantime.
fjtorres5591
...and to give trailing edge tech companies a chance to catch up.
Riiighhttt...
Anybody remember Musk asking for a six month pause?
(Which he didn't get.)
Six months later he had launched Xai, bought a ton of GPUs and built a data Center to train and run GROK, his chatbot app.
Can we say "self-serving" girls and boys?
Seriously. This passes for news?
adichiru
First of all, it is not possible, just impossible to implement. Secondly, it is a bad move because it will solve nothing but create more tensions and disrupt international relations in other domains. Also, the only way to be as safe as possible is to start as many AI companies that build models as possible and insist on open source. That's all ...
fjtorres5591
Exactly.
The pundits know this, they're not stupid.
Just self-serving.
It is us they think are stupid.