Areeba Rashid

AI Superintelligence Risk: Vitalik Buterin’s Proposal to Slow Down Progress


AI Superintelligence
  • Vitalik Buterin suggests reducing global computing power to slow AI development and manage superintelligence risks.
  • AI superintelligence could emerge in five years, Buterin warns, urging stronger regulation to prevent harmful consequences.
  • To curb AI risks, Buterin proposes tracking AI hardware and requiring international certification to enable a global pause.

Ethereum creator Vitalik Buterin has proposed that the world slow down AI development to head off what he believes is an impending threat from AI. In a blog post on January 5, Buterin returned to his idea of “defensive accelerationism” (d/acc), suggesting that a so-called “soft brake” on industrial-scale computational resources could slow the development of AI and buy humanity time to prepare for the emergence of superintelligent AI.

Global AI Superintelligence and Solutions

At the Ethereum Classic Summit, Buterin said that AI superintelligence could materialize within five years and would be smarter than humans across the board. Some of the consequences of these risks remain unknown, but Buterin is convinced they could be disastrous if AI development is not properly regulated.

To address these problems, Buterin proposed cutting overall computational capacity by 99% for one to two years. This would entail restricting access to large-scale computing platforms, which he argues is an effective way to slow AI progress and give society more time to deal with the risks.

As experts define it, superintelligent AI is a theoretical form of AI that far surpasses human capabilities in every domain. Others in the tech world have expressed similar concerns. In March 2023, more than 2,600 academics and corporate leaders signed an open letter calling for a pause in AI development to avert threats to people and the world.

Tracking AI Hardware for Safety

When Buterin first proposed d/acc in 2023, he invited contributors to work out how to safely curtail certain types of AI; he has now put forward a concrete proposal. He concedes that hardware constraints could be useful in countering high AI risk, but only if weaker measures, such as liability rules for AI producers, prove insufficient.

Buterin’s plan also involves tracking AI hardware chips, which might have to be registered with international regulators. He further outlines a model in which AI hardware could only operate after passing a weekly check and certification from international organizations. Because no device could be kept running unless the others were certified as well, this would make a coordinated global “pause” on AI advancement enforceable.

Buterin’s approach stands in contrast to the philosophy of effective accelerationism (e/acc), which calls for accelerating technology as fast as possible. His vision is more measured: innovation should be pursued, but cautiously, given the risks posed by AI.
