ChatGPT maker OpenAI's former researcher shares AI fears: I'm pretty terrified by ...

Steven Adler, a safety researcher, has left OpenAI, citing concerns over the rapid development of Artificial General Intelligence (AGI) and its potential risks. Adler criticised the competitive race towards AGI among AI labs and superpowers, warning it could lead to dangerous consequences. His departure adds to the growing scrutiny of AI's rapid progress.
Another safety researcher has left OpenAI, expressing concerns about the escalating race towards Artificial General Intelligence (AGI). Steven Adler, who spent four years at the leading AI research company, announced his departure via a post on X (formerly Twitter).
He did not elaborate on the specific reasons for his departure from OpenAI, but his post highlights growing anxieties within the AI community about the potential risks associated with increasingly powerful AI systems.

Some personal news: After four years working on safety across @openai, I left in mid-November. It was a wild ride with lots of chapters - dangerous capability evals, agent safety/control, AGI and online identity, etc. - and I'll miss many parts of it.

‘Terrified by pace of AI development’


In his posts, Adler said he is terrified by the pace of AI development.
Honestly I'm pretty terrified by the pace of AI development these days. When I think about where I'll raise a future family, or how much to save for retirement, I can't help but wonder: Will humanity even make it to that point?

He also criticised the competitive environment surrounding AGI development, suggesting that the rush to achieve AGI between leading AI labs and global superpowers could have unintended consequences.
IMO, an AGI race is a very risky gamble, with huge downside. No lab has a solution to AI alignment today. And the faster we race, the less likely that anyone finds one in time.

Today, it seems like we're stuck in a really bad equilibrium. Even if a lab truly wants to develop AGI responsibly, others can still cut corners to catch up, maybe disastrously. And this pushes all to speed up. I hope labs can be candid about real safety regs needed to stop this.

Adler's departure comes at a time of heightened scrutiny for the AI field, with the recent emergence of Chinese AI startup DeepSeek further intensifying the global competition in AI development.
Last year, Suchir Balaji, a 26-year-old former OpenAI researcher who had accused the company of violating copyright law, was found dead in his apartment.