Should we pause AI development?
As of this writing, more than 2,800 people have signed an open letter calling for a pause on training powerful AI systems, among them tech founders such as Elon Musk and Steve Wozniak as well as AI experts such as Yoshua Bengio, who shared a Turing Award for his deep learning research. Specifically, the letter, titled "Pause Giant AI Experiments," calls for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."
The letter was written by the Future of Life Institute, an organization founded in 2014 that focuses largely on reducing the risks posed by artificial intelligence (AI). It proposes that while training is paused, AI researchers develop a shared protocol for ensuring the safety of AI systems and work with governments to craft robust AI regulations.
This comes amid an AI craze that can be largely attributed to the release of ChatGPT, whose underlying language models (now GPT-4) Microsoft, a major OpenAI investor, now uses to power its Bing search, fueling a search-engine war between Microsoft and Google. As a result, tech companies are training ever more powerful large language models to beat their competitors. The open letter posits that as AI labs continue to train massive models, those models are becoming human-competitive at general tasks; in other words, they can perform basic tasks we would expect humans to be able to carry out.
While many AI experts agree that such systems carry real risks, several points in the letter remain disputed. For one, many believe a six-month moratorium is simply not enough. Decision theorist Eliezer Yudkowsky notes that AI alignment researchers have worked for decades to find a way to guarantee AI safety, and that a problem which has stumped them for so long is highly unlikely to be solved in six months. Yudkowsky instead proposes shutting down all large AI training runs immediately, indefinitely, and worldwide, which would require large-scale policy change.
"Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die," Yudkowsky asserts in a Time Magazine editorial. "Not as in 'maybe possibly some remote chance,' but as in 'that is the obvious thing that would happen.'"
Implementing any kind of moratorium on AI would likely be difficult, and the open letter offers few specifics on how one would work. As Box CEO Aaron Levie told Axios, "It was just, 'Let's now spend the time to get together and work on this issue.' But it was signed by people that have been working on this issue for the past decade."
"There's a lot of conversation about, 'Let's pull the plug,' but I'm not sure there is a single plug," said Arati Prabhakar, director of the White House Office of Science and Technology Policy, to Axios. The office released a "Blueprint for an AI Bill of Rights" last fall.
Others feel the letter is focused on the wrong issues. It concentrates on the dangers of models more powerful than GPT-4, largely invoking intelligence that matches or even surpasses humans. But it's worth acknowledging that current AI is nowhere near sentient, and not quite human-competitive either. Experts including Yudkowsky agree that when people see an AI claim to be sentient, the system is most likely just imitating descriptions of sentience found in its training data.
Many AI experts argue we should be more concerned about the risks of the AI we already have, such as racial and gender biases, and that the open letter shifts our focus to a "Hollywood-esque" future of sentient AI. Shiri Dori-Hacohen, an assistant professor at the University of Connecticut whose work is cited in the letter, took issue with its framing. As she told Reuters: "AI does not need to reach human-level intelligence to exacerbate those risks … There are non-existential risks that are really, really important, but don't receive the same kind of Hollywood-level attention."
Margaret Mitchell, who co-founded the ML Fairness group at Google and whose work is also cited in the letter, echoed similar sentiments to Reuters: "Ignoring active harms right now is a privilege that some of us don't have."
It's also worth noting that many at OpenAI, including CEO Sam Altman, have not signed the open letter. (Altman and Elon Musk were among OpenAI's co-founders, though Musk stepped down from its board of directors in 2018.) Given that OpenAI has not slowed its improvements to ChatGPT and the GPT language models in the face of their sudden popularity, it seems that many either don't agree with pausing AI or don't believe a pause would solve the problem. When OpenAI first released ChatGPT to the public, it described the launch as a research preview, essentially a public beta, intended to improve the underlying model's conversational abilities and to surface ethical issues with the system.
In a statement published in August, OpenAI described its approach to AI alignment, the practice of ensuring that AI systems behave safely and in line with human intent. The plan is iterative: for every highly capable AI it releases, OpenAI attempts to align it with human ethical principles, refining its techniques in the process. By pushing alignment methods as far as possible on today's AI, the company believes it will be better prepared to handle artificial general intelligence (AGI), AI that can perform tasks and learn from its mistakes the way humans do (so-called "sentient" AI would fall into this category). Part of the plan also involves training AI systems to help carry out alignment research itself.
It seems doubtful that much will change over the next six months, though heightened public awareness of AI could spur new work in alignment research. But whether or not development pauses, it's highly unlikely that researchers will find a simple or quick solution to AI's ethical dilemmas.