Google’s former CEO Eric Schmidt, along with Scale AI CEO Alexandr Wang and Center for AI Safety Director Dan Hendrycks, has cautioned against a government initiative to develop artificial intelligence systems with superhuman capabilities, known as AGI (Artificial General Intelligence). In their policy paper, the trio likened such an initiative to a “Manhattan Project” and warned that it could provoke severe retaliation from China, potentially including cyberattacks, which might destabilise international relations.
“[A] Manhattan Project [for AGI] assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it,” the authors write in their paper “Superintelligence Strategy.”
“What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure,” they said.
What was the Manhattan Project?
This publication emerges just months after a US congressional commission proposed a “Manhattan Project-style” program for AGI development, modeled after America’s 1940s atomic bomb initiative.
The Manhattan Project was a top-secret research and development undertaking during World War II that produced the first nuclear weapons. It was led by the US and its primary objective was to develop and build atomic bombs before Nazi Germany could.
The project culminated in the development and detonation of the first atomic bombs, which were used in the bombings of Hiroshima and Nagasaki, Japan, in August 1945.
“Given the stakes, superintelligence is inescapably a matter of national security, and an effective superintelligence strategy should draw from a long history of national security policy,” the authors said.
Tech industry leaders’ warning to US government on AGI initiative
The paper challenges the growing consensus among American policy and industry leaders that a government-backed AGI program represents the optimal strategy for competing with China. Instead, Schmidt, Wang, and Hendrycks suggest the US faces an AGI standoff similar to mutually assured destruction in nuclear strategy.
They argue that just as global powers refrain from seeking a monopoly on nuclear weapons, lest they invite preemptive strikes, the US should exercise similar caution in racing toward AGI dominance.
The authors introduce the concept of “Mutual Assured AI Malfunction” (MAIM), proposing that governments could proactively disable threatening AI projects rather than waiting for adversaries to weaponise AGI.
They recommend that the US shift focus from “winning the race to superintelligence” toward developing deterrents against other countries creating superintelligent AI, including expanding cyberattack capabilities to disable threatening projects and restricting access to advanced AI chips and open-source models.