Leopold Aschenbrenner, a former researcher on OpenAI’s superalignment team, has ignited a firestorm in the artificial intelligence community with the release of a 165-page essay series titled “Situational Awareness.” In the essays, Aschenbrenner alleges that he was terminated from the company in April for raising critical security concerns about the protection of its advanced AI models.
Aschenbrenner, who is aligned with the “effective altruism” movement, claims he shared a memo with OpenAI’s board arguing that the company’s security measures were “egregiously insufficient” to protect its core AI research from theft by foreign state actors, particularly China. He argues that the race to build artificial general intelligence (AGI) is a matter of national security and that a rival nation stealing key algorithmic secrets could have disastrous consequences. His firing, he suggests, was a direct result of pushing these internal warnings.
OpenAI has publicly stated that Aschenbrenner’s dismissal was unrelated to his safety concerns and instead stemmed from allegations that he leaked confidential information. The public dispute comes at a sensitive time for the AI leader, following the recent high-profile departures of co-founder Ilya Sutskever and safety lead Jan Leike, who both voiced concerns that the company was prioritizing product development over safety.
Aschenbrenner’s essays go beyond his termination, making the provocative prediction that AGI could be achieved as early as 2027. He warns that the world is “sleepwalking” into a future dominated by superintelligent systems without adequate preparation or security. His claims have intensified the debate over the internal culture at leading AI labs and whether the immense pressure to commercialize and compete is compromising long-term safety and security protocols. The incident underscores the growing tension between rapid innovation and the profound risks associated with developing transformative AI.