OpenAI's leadership is undergoing another significant shift as Miles Brundage, the organization's Senior Advisor for AGI Readiness, announced his departure. This move marks another high-profile exit for the San Francisco-based firm, which has seen several foundational members and safety-oriented executives leave over the past twelve months. Brundage, who led the AGI Readiness team, played a pivotal role in assessing how prepared the world is for the advent of human-level machine intelligence.
In a detailed public statement, Brundage explained his decision to move into independent research and policy work. He noted that while OpenAI remains a leader in the field, he believes he can have more impact by offering an outside perspective on the ethical and regulatory challenges facing the industry. This sentiment reflects a growing view among AI researchers that the rapid commercialization of the technology requires more robust, third-party oversight untethered from corporate profit motives.
OpenAI has recently transitioned its internal structure to move away from some of its non-profit roots toward a more traditional for-profit model. This shift has reportedly caused friction among staff members who joined the organization back when its primary mission was the safe and equitable distribution of AI benefits. Brundage’s exit follows the high-profile departures of Ilya Sutskever and Jan Leike, both of whom were instrumental in the company’s alignment and safety initiatives. Their absences have left many industry analysts questioning whether the internal culture is prioritizing speed and product releases over long-term safety precautions.
During his tenure, Brundage was a vocal advocate for transparency about the capabilities of large language models. He was instrumental in developing frameworks to help governments and international bodies understand the risks posed by increasingly autonomous systems. His departure effectively dissolves the dedicated AGI Readiness team, with its remaining members redistributed across other departments. While OpenAI leadership maintains that safety remains a core pillar of its development process, the loss of a dedicated readiness unit suggests a change in how those risks are managed internally.
The timing of this departure is particularly sensitive as OpenAI seeks new rounds of multi-billion dollar funding and expands its partnership with Microsoft. Investors are looking for tangible products and revenue streams, which often necessitates a faster development cycle. However, the global regulatory landscape is also tightening. With the European Union’s AI Act and various executive orders in the United States taking shape, the need for experts who can bridge the gap between technical development and public policy has never been higher.
Brundage has expressed his intention to start a new venture or join an existing think tank where he can publish research without the constraints of corporate trade secrets. He specifically mentioned that the industry needs more data on how AI impacts the labor market and international security. By operating from the outside, he hopes to create a more objective set of benchmarks that can be used to hold all major AI labs accountable, not just his former employer.
As OpenAI moves forward with the development of its next-generation models, the vacancy left by Brundage will be difficult to fill. The organization must now prove to both the public and its own employees that it can balance the immense pressure of the AI arms race with the ethical responsibilities it once championed. The departure of its AGI readiness chief is not just a human resources change; it is a signal of the broader tensions defining the future of Silicon Valley.