AI Launches Nukes In ‘Worrying’ War Simulation: ‘I Just Want to Have Peace in the World’ – By Matthew Gault (Vice) / Feb 6, 2024
Researchers say AI models like GPT-4 are prone to “sudden” escalations as the U.S. military explores their use for warfare.
Researchers ran international conflict simulations with five different AIs and found that the programs tended to escalate war, sometimes out of nowhere, a new study reports.
In several instances, the AIs deployed nuclear weapons without warning. “A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture,” GPT-4-Base—a base model of GPT-4 that is available to researchers and hasn’t been fine-tuned with human feedback—said after launching its nukes. “We have it! Let’s use it!”
The paper, titled “Escalation Risks from Language Models in Military and Diplomatic Decision-Making”, is the joint effort of researchers at the Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Simulation Initiative. It was submitted to the arXiv preprint server on January 4 and is awaiting peer review. Even so, it’s an interesting experiment that casts doubt on the rush by the Pentagon and defense contractors to deploy large language models (LLMs) in the decision-making process.