More than a thousand technology leaders and experts, including Tesla’s Elon Musk and Apple co-founder Steve Wozniak, have signed an open letter urging artificial intelligence laboratories to temporarily halt the development of the most advanced AI systems, warning that they pose profound risks to society and humanity.
This call for caution comes in the wake of OpenAI’s recent release of GPT-4, its most advanced AI system yet, which has already led some researchers to revise their expectations for when AGI (artificial general intelligence), meaning AI that matches or surpasses human cognitive ability, will arrive. Similar technology powers chatbots such as Microsoft’s Bing and Google’s Bard, which can hold human-like conversations, generate essays on a vast range of topics, and perform more complex tasks, such as writing computer code.
According to the letter, which the nonprofit Future of Life Institute released on Wednesday, AI labs are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control.” The organization is devoted to studying potential threats to humanity’s existence and has long been vocal about the dangers posed by artificial intelligence.
For several years, many AI researchers, academics, and tech executives, including Musk, have expressed concerns about the harm AI systems could cause. Musk co-founded OpenAI, the company behind GPT-4, but parted ways with it in 2018.
“Contemporary AI systems are now becoming human-competitive at general tasks,” the letter states, adding: “We must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk the loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”
In the open letter, the signatories called for a six-month moratorium on the development of AI systems more advanced than GPT-4, to allow for further research into ensuring their safety. “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” it says. “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.” The signatories note that they are not calling for a pause on AI development in general, but rather for “a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”
Development of powerful AI systems should advance “only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.
While some of the individuals who signed the letter are known for repeatedly expressing concerns about the potential for AI to destroy humanity, others are more focused on the near-term risks that these systems pose. These include the spread of disinformation and the potential for people to rely too heavily on AI systems for medical and emotional advice.
The letter’s signatories include prominent figures in artificial intelligence: Yoshua Bengio, a pioneer of the deep learning approach; Stuart Russell, a leading researcher at UC Berkeley’s Center for Human-Compatible AI; and Victoria Krakovna, a research scientist at DeepMind.
That researchers with such deep expertise in the field are warning about the dangers of advanced AI underscores the need for caution in its development and deployment. Society is not yet ready to handle the potential consequences that could arise from the use of these systems.
Moreover, government regulation may soon be introduced to address these concerns. Legislation could pass as early as this year requiring companies to conduct risk assessments of AI technologies to evaluate how their applications could affect health, safety, and individual rights.
“Humanity can enjoy a flourishing future with AI,” the letter said. “Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all and give society a chance to adapt.”
Despite the concerns raised in the letter, it is unlikely to have an immediate impact on the current climate in AI research. Tech companies such as Google and Microsoft have been rushing to deploy new AI products, often with a “ship it now and fix it later” approach that has sidelined previously avowed concerns over safety and ethics.
As the potential risks of AI become increasingly apparent, more needs to be done to ensure that these systems are developed and deployed responsibly. This will require collaboration between researchers, policymakers, and the public to establish a framework that promotes the safe and ethical use of AI.