Geoffrey Hinton, the renowned "Godfather of AI," has publicly expressed concerns about
Sam Altman's leadership of OpenAI, criticizing Altman's focus on profits over safety. Hinton,
a Nobel laureate in Physics, praised his former student Ilya Sutskever for playing a key role in Altman's temporary removal from
OpenAI in November 2023.
Why Geoffrey Hinton dislikes Sam Altman
Hinton's concerns stem from his long-standing commitment to the ethical development of AI. In 2009, he demonstrated the potential of Nvidia's CUDA platform by training a neural network to recognize human speech, an early sign of how useful GPUs could be for AI research. His research group at the University of Toronto continued to push the boundaries of machine learning, and in 2012 it developed a neural network that could identify everyday objects in images, a result that validated GPU-based training and spurred the widespread adoption of GPU-powered neural networks.
Sutskever, a co-founder and former chief scientist of OpenAI, played a pivotal role in shaping the organization's most advanced AI models. When OpenAI's board ousted Altman as CEO in late 2023, Sutskever initially supported the decision but later regretted it and joined others in advocating for Altman's reinstatement. He eventually left OpenAI in May 2024 to start his own AI venture, Safe Superintelligence Inc.
Hinton, who supervised Sutskever during his Ph.D., reflected on OpenAI's original mission, which focused on ensuring the safety of artificial general intelligence (AGI). Over time, he observed a shift under Altman's leadership towards a profit-driven approach, a change Hinton views as detrimental to the organization's core principles.
'Godfather of AI' has warned of AI's catastrophic impact on humanity if left unchecked
Beyond his critique of OpenAI, Hinton has long warned about the dangers AI poses to society. He has expressed concerns that AI systems, by learning from vast amounts of digital text and media, could become more adept at manipulating humans than many realize. Initially, Hinton believed that AI systems were far inferior to the human brain in terms of understanding language, but as these systems began processing larger datasets, he reconsidered his stance. Now, Hinton believes AI may be surpassing human intelligence in some respects, which he finds deeply unsettling.
As AI technology rapidly advances, Hinton fears the implications for society. He has warned that the internet could soon be flooded with AI-generated false information, making it difficult for people to discern what is real. He is also concerned about AI's potential impact on the job market, suggesting that while chatbots like ChatGPT currently complement human workers, they could eventually replace roles such as paralegals, personal assistants, and translators.
Hinton's greatest concern lies in the long-term risks AI poses, particularly the possibility of AI systems exhibiting unexpected behavior as they process and analyze vast amounts of data. He has expressed fears that AI systems could one day be allowed to generate and run their own code, potentially paving the way for autonomous weapons, or "killer robots." Having once dismissed such risks as distant, Hinton now believes they are much closer than he previously thought and could materialize within the next few decades.
Other experts, including many of Hinton's students and colleagues, have described these concerns as hypothetical. Nonetheless, Hinton worries that the current competition between tech giants like Google and Microsoft could spiral into a global AI arms race that would be difficult to regulate. Unlike nuclear weapons research, AI research can easily be conducted in secret, making regulation and oversight much harder. Hinton believes the best hope for mitigating these risks lies in collaboration among the world's top scientists to devise methods of controlling AI. Until then, he argues, further development of these systems should be paused.
Hinton's concerns about Altman's leadership are not unique. Elon Musk, another co-founder of OpenAI, has been a prominent critic of Altman, particularly regarding OpenAI's transition from a nonprofit to a for-profit organization. Musk has repeatedly pointed out that this shift runs counter to the company's original purpose of being an open-source, nonprofit initiative to counterbalance other tech giants.
As the AI race continues, Hinton's warnings underscore the growing divide between technological advancement and ethical responsibility, with OpenAI and its leadership at the center of this tension.