**Call for a Prohibition on the Advancement of AI Superintelligence**
Reflecting growing concern about the consequences of advanced artificial intelligence, Apple co-founder Steve Wozniak has joined more than a thousand prominent figures in calling for a prohibition on the development of AI superintelligence. The signatories include Nobel Prize winners, pioneering AI researchers, and leading technology figures, all voicing concern about the rapid pace of AI development.
The statement, published on the [Superintelligence Statement website](https://superintelligence-statement.org), acknowledges the dual nature of AI tools. While they promise unprecedented advances in health and prosperity, the pursuit of superintelligence (AI capable of outperforming humans at virtually all cognitive tasks) raises serious concerns, including economic displacement, erosion of civil liberties and human control, national security risks, and even potential human extinction.
At its core, the statement calls for a prohibition on the development of superintelligence until two conditions are met:
1. There is broad scientific consensus that it can be developed safely and controllably.
2. There is strong public buy-in for the effort.
Notable figures who have endorsed the statement include:
– Geoffrey Hinton, Turing Award winner widely described as a “godfather” of deep learning.
– Yoshua Bengio, fellow Turing Award winner and pioneering AI researcher.
– Stuart Russell, a professor at UC Berkeley and a specialist in AI safety.
– Nobel laureates Frank Wilczek and John C. Mather.
– Beatrice Fihn, former executive director of the Nobel Peace Prize-winning ICAN, and economics Nobel laureate Daron Acemoğlu.
– Susan Rice, former U.S. National Security Adviser.
These experts have previously warned that the development of artificial general intelligence (AGI) could pose a societal-scale risk to humanity comparable to nuclear war or pandemics. Their joint stance underscores the urgency of addressing the ethical and safety implications of AI before development outpaces our ability to control it.
As the debate over AI continues to evolve, the call for caution on superintelligence underscores the need for a balanced approach, one that weighs rapid technological progress against safety and the public interest.