
Hail SkyNet! Let the Terminator come: Musk and Wozniak call for suspending the training of AI systems more powerful than GPT-4

Photo: Warner Bros.

The Future of Life Institute, a non-profit organization, published an open letter in which SpaceX CEO Elon Musk, Apple co-founder Steve Wozniak, philanthropist Andrew Yang, and about a thousand other artificial intelligence researchers called for an “immediate suspension” of the training of AI systems “more powerful than GPT-4.”

The letter states that artificial intelligence systems with “human-competitive intelligence” may carry “serious risks to society and humanity.” It calls for laboratories to suspend training for six months.

“Powerful artificial intelligence systems should only be developed when we are confident that their effects will be positive and their risks will be manageable,” the authors of the letter emphasize.

More than 1,125 people signed the petition, including Pinterest co-founder Evan Sharp, Ripple co-founder Chris Larsen, Stability AI CEO Emad Mostaque, and researchers from DeepMind, Harvard, Oxford, and Cambridge. The letter was also signed by AI heavyweights Yoshua Bengio and Stuart Russell. Russell has also called for the development of advanced AI to be suspended until independent experts develop, implement, and test common safety protocols for such AI systems.

The letter details the potential risks to society and civilization posed by human-competitive AI systems in the form of economic and political upheavals.

Here is the full text of the letter:

“Artificial intelligence systems that compete with humans can pose a serious danger to society and humanity, as extensive research has shown and as leading artificial intelligence laboratories acknowledge. As stated in the widely endorsed Asilomar AI Principles, advanced AI could trigger profound changes in the history of life on Earth and should be designed and managed with commensurate care and resources. Unfortunately, this level of planning and management does not exist, even though in recent months AI labs have been locked in an uncontrolled race to develop and deploy ever more powerful “digital minds” that no one – not even their creators – can understand, predict, or reliably control.

Modern artificial intelligence systems are becoming human-competitive at general tasks, and we must ask ourselves: should we allow machines to flood our information channels with propaganda and fakes? Should we automate away all jobs, including the fulfilling ones? Should we develop “non-human minds” that could eventually outnumber us, outwit us, and replace us? Should we risk losing control of our civilization?

Such decisions should not be delegated to unelected technology leaders. Powerful artificial intelligence systems should only be developed when we are confident that their effects will be positive and their risks manageable. This confidence must be well founded and grow with the magnitude of a system’s potential effects. A recent OpenAI statement regarding artificial general intelligence notes that “at some point, it may be important to obtain an independent review before starting to train future systems, and for the most advanced efforts, to agree to limit the rate of growth of the compute used to create new models.” We agree. That moment has come now.

Therefore, we call on all AI labs to immediately suspend the training of AI systems more powerful than GPT-4 for at least six months. This pause should be public and verifiable, and all key actors should be involved in it. If such a pause cannot be established quickly, then governments should intervene and impose a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously reviewed and monitored by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause in AI development in general, but simply a step back from a dangerous race towards ever larger, unpredictable “black box” models with emergent capabilities.


AI research and development should be refocused on making today’s powerful, state-of-the-art AI systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems. These efforts should, at a minimum, include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computing power; provenance and watermarking systems that help distinguish real content from generated content and track model leaks; a robust audit and certification ecosystem; liability for harm caused by AI; strong public funding for technical AI safety research; and well-resourced institutions to cope with the severe economic and political upheavals (especially to democracy) that AI will cause.

Humanity can enjoy a prosperous future with AI. Having succeeded in creating powerful systems, we will arrive at an “AI summer” in which we reap the benefits, develop these systems for the benefit of all, and give society the opportunity to adapt to them. Society has paused other technologies with potentially catastrophic consequences, and we can do the same here.”

Major artificial intelligence labs such as OpenAI have not yet responded to the letter.

“The letter is not perfect, but its spirit is right: we need to slow down until we better understand all the risks,” said Gary Marcus, professor emeritus at New York University, who signed the letter. “AI systems can cause serious harm… The big players are becoming more and more secretive about what they are doing.”

OpenAI introduced GPT-4 on March 14. The model can interpret not only text but also images, and it now recognizes rough sketches, including hand-drawn ones.

After the announcement of the new language model, OpenAI declined to publish the research materials underlying it. Members of the AI community criticized the decision, noting that it undermines the company’s spirit as a research organization and makes it harder for others to replicate its work. At the same time, it makes it harder to develop defenses against the threats posed by AI systems.

Hail SkyNet! Let the Terminator come!
