Leading figures in the development of artificial intelligence systems, including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis, have signed a statement warning that the technology they are building may someday pose an existential threat to humanity comparable to that of nuclear war and pandemics.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement released today by the Center for AI Safety, a nonprofit.
The idea that AI might become difficult to control, and either accidentally or deliberately destroy humanity, has long been debated by philosophers. But in the past six months, following some surprising and unnerving leaps in the performance of AI algorithms, the issue has become much more widely and seriously discussed.
In addition to Altman and Hassabis, the statement was signed by Dario Amodei, CEO of Anthropic, a startup dedicated to developing AI with a focus on safety. Other signatories include Geoffrey Hinton and Yoshua Bengio—two of three academics given the Turing Award for their work on deep learning, the technology that underpins modern advances in machine learning and AI—as well as dozens of entrepreneurs and researchers working on cutting-edge AI problems.
“The statement is a great initiative,” says Max Tegmark, a physics professor at the Massachusetts Institute of Technology and the director of the Future of Life Institute, a nonprofit focused on the long-term risks posed by AI. In March, Tegmark’s institute published a letter calling for a six-month pause on the development of cutting-edge AI algorithms so that the risks could be assessed. The letter was signed by hundreds of AI researchers and executives, including Elon Musk.
Tegmark says he hopes the statement will encourage governments and the general public to take the existential risks of AI more seriously. “The ideal outcome is that the AI extinction threat gets mainstreamed, enabling everyone to discuss it without fear of mockery,” he adds.
Dan Hendrycks, director of the Center for AI Safety, compared the current moment of concern about AI to the debate among scientists sparked by the creation of nuclear weapons. “We need to be having the conversations that nuclear scientists were having before the creation of the atomic bomb,” Hendrycks said in a quote issued along with his organization’s statement.
The current tone of alarm is tied to several leaps in the performance of AI algorithms known as large language models. These models consist of a specific kind of artificial neural network that is trained on enormous quantities of human-written text to predict the words that should follow a given string. When fed enough data, and with additional training in the form of feedback from humans on good and bad answers, these language models are able to generate text and answer questions with remarkable eloquence and apparent knowledge—even if their answers are often riddled with mistakes.
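The next-word prediction task described above can be illustrated with a deliberately simple sketch. Real language models use deep neural networks trained on enormous corpora; the bigram counter below (with an invented toy corpus) only shows the basic idea of predicting the word most likely to follow a given one.

```python
from collections import Counter, defaultdict

# Toy corpus, invented purely for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — the most common follower of "the" here
```

A large language model performs the same task, but with a neural network that generalizes far beyond the exact word pairs it has seen, which is what makes its output fluent rather than merely a replay of its training text.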
These language models have proven increasingly coherent and capable as they have been fed more data and computing power. The most powerful model created so far, OpenAI’s GPT-4, is able to solve complex problems, including ones that appear to require some forms of abstraction and common-sense reasoning.