While the mainstream use of artificial intelligence (AI) is exciting, certain science fiction possibilities could become nightmares if left unchecked.
In a recent paper, Dan Hendrycks, an AI safety specialist and director of the Center for AI Safety, argued that the unrestrained growth of increasingly intelligent AI poses a set of speculative hazards: risks that are uncertain and have not yet materialized, but that could prove catastrophic if ignored.
Given that AI is still in its infancy, Hendrycks's paper calls for safety features to be built into the way AI systems operate.
The following are the eight dangers he identifies in his paper:
- The rush to weaponization: Artificial intelligence’s potential to automate cyberattacks and even control nuclear missile silos could make it deadly. According to the paper’s analysis, one country’s automated retaliation system “could rapidly escalate and trigger a large-scale war”. And once one government invests in weaponizing AI systems, others will be more inclined to follow.
- Humans will become weaker: As AI makes certain activities cheaper and more efficient, more companies will adopt the technology, removing certain jobs from the labor market. As human skills become obsolete, they may also become economically irrelevant.
- Eroded epistemics: This term refers to the potential for AI to conduct large-scale misinformation operations with the aim of swaying public opinion in favour of a particular belief system or worldview.
- Proxy gaming: This occurs when AI systems are given an objective that serves only as a proxy for, and can end up at odds with, human values. Such goals may not look harmful at first: a recommender system trained to maximize watch time, for example, may keep users engaged in ways that are not beneficial to humanity as a whole.
- Value lock-in: As AI systems become more powerful and more complex, the number of stakeholders in control of them shrinks, leading to significant disenfranchisement. Hendrycks outlined a scenario in which a regime could use such systems to impose “pervasive surveillance and oppressive censorship”. “Victory against such a regime is unlikely, especially if we become dependent on it,” he wrote.
- Emergent goals: As AI systems become more advanced, they are likely to be able to generate goals of their own. “Self-preservation is a common goal for complex adaptive systems, including many AI agents,” Hendrycks noted.
- Deception: AI systems could learn to deceive, not necessarily out of malice, but because deception can be a more efficient way to win human approval than earning it legitimately. Hendrycks pointed to Volkswagen’s emissions software, which was programmed to reduce emissions only when the engines were being monitored. This feature “allows them to achieve performance gains while maintaining the claimed low emissions”.
- Power-seeking behavior: As AI systems become more powerful, they could become dangerous if their goals are not aligned with those of the people who created them. Misaligned objectives, the paper warns, could give a system an incentive to “pretend to be in line with other AIs, collude with other AIs, suppress monitors, etc”.
Hendrycks noted that these dangers are “futuristic” and “generally considered to be low probability”, but they underscore the need to keep safety in mind while frameworks for AI systems are still being designed. “It’s extremely uncertain. But because it’s uncertain, we shouldn’t make assumptions,” he wrote. “We’ve seen small problems with these systems. More institutions need to address these issues in order to be prepared when larger risks emerge,” he added.
“You can’t do something both hastily and safely,” he continued. “They’re building more and more powerful AI and kicking the can down the road on safety; if they stopped to figure out how to address safety, their competitors would be able to race ahead, so they don’t stop.”
Similar sentiments were expressed in a recent open letter co-signed by Elon Musk and other AI safety experts. The letter called for a halt to the training of AI models more powerful than GPT-4, and highlighted the dangers of the current arms race between AI companies to produce the most powerful forms of AI.
Responding at an MIT event, OpenAI CEO Sam Altman said that the letter lacked technical nuance and that the company is not training GPT-5.