This was my last week at OpenAI. I joined in mid-2021 to help spin up safety policies and processes for the API, and then transitioned to the Policy Research team where I worked closely with Miles Brundage on AI governance, frontier policy issues, and AGI readiness.
Here’s the message I shared on Slack:
Hi all, after almost three and a half years here, I am leaving OpenAI. I’ve always been strongly driven by the mission of ensuring safe and beneficial AGI, and after Miles’s departure and the dissolution of the AGI Readiness team, I believe I can pursue this more effectively externally.
During my time here I’ve worked on frontier policy issues like dangerous capability evals, digital sentience, and governing agentic systems, and I’m so glad the company supported the neglected, slightly weird kind of policy research that becomes important when you take seriously the possibility of transformative AI. It breaks my heart a little that I can’t see a place for me to continue doing this kind of work internally.
It’s been a true privilege to work with such exceptional people on world-changing technology at such a pivotal moment. While change is inevitable with growth, I’ve been unsettled by some of the shifts over the last ~year, and the loss of so many people who shaped our culture. I sincerely hope that what made this place so special to me can be strengthened rather than diminished. To that end, at the risk of being presumptuous, I’ll leave you with my 2¢:
Remember the mission is not simply to “build AGI.” There is still so much to do to ensure it benefits humanity.
Take seriously the prospect that our current approach to safety might not be sufficient for the vastly more powerful systems we think could arrive this decade.
Try to say true things even if they are inconvenient or hard to face.
Act with appropriate gravitas: what we’re building is not just another tech product.
I have learned so much here and I’m so grateful to have been on this wild ride with you. I hope you will stay in touch.
What’s next?
First, I’m going to take a break! I will spend some time reading, thinking, and writing, and hopefully publish more here. I also want to talk to lots of people about the state of AI safety and policy and where the gaps are before I commit to my next role. The next few years will be critical, and with so many exciting things happening in this space at the moment, I want to fully explore possible future directions and consider where I can have the most impact.
The kinds of questions I’m thinking about are:
What are the most important gaps in safety, policy, and governance we need to address to have the best shot of a good transition to AGI?
How can we raise literacy on AI risks and impacts through storytelling and culture?
How far can we get with evals as an approach to AI safety and governance, and where do they fall short?
What should we be doing about potential AI sentience and moral patienthood?
How can we cooperate with AI systems that have “goals” and “preferences” in some relevant sense?
How can we leverage AI to improve our epistemics and sense-making abilities (and counteract the ways in which AI degrades them)?
I’m also interested in mechanism design, public goods, and metascience.
If you have takes on these questions, I’d be interested in connecting! You can find me: