Apply for our introductory seminars
If you are new to the field and interested in taking a deep dive into AI Safety, consider joining one of our 8-week reading and discussion groups. In our Intro to AI Safety Governance program, in addition to the basics of the AI Safety field, you will learn about existing and potential ways of steering AI Safety policy at the micro and macro levels. In the AI Safety Alignment program, we aim to give you an overview of AI alignment, the research field that aims to align advanced AI systems with human intentions. Applications for the fall period are due on 21 July.
Expression of Interest
Interested in doing AI alignment research? Reach out to the organizers and we can help you find a mentor.
CONTACT US
Check out AI Safety positions at a range of organizations. You might have heard of some of the bigger ones, like Anthropic and OpenAI.
AI SAFETY POSITIONS »
Take part in worldwide contests
48 hours of intense, fun, and collaborative research on the most interesting questions of our day in machine learning and AI safety!
ALIGNMENT JAM SESSIONS »
How do we make an AI that does not misgeneralize our goals? How do we overcome the seemingly natural desire for survival by making an advanced AI that lets us shut it off?
AI ALIGNMENT AWARDS »