AI Safety Dublin
We are a growing grassroots AI community working to reduce risks from advanced AI by raising awareness, upskilling and creating opportunities for a safer AI world of tomorrow. If you are an AI enthusiast, tech observer, or wondering about the future of work and humanity, consider joining our community!
MAILING LIST
Registrations for the AI Safety Fundamentals Intensive course are open until 25 September, 23:59, and courses start 30 September. Link
Stay updated by signing up for our Mailing List!
Recent Events
Talk: European Leaders converge at Trinity to tackle AI's greatest challenges and opportunities
Dragoş Tudorache MEP, the architect of the groundbreaking EU AI Act, delivered a keynote talk on ‘The Geopolitics of AI’, which centred on the EU's initiative to set a gold standard for AI regulation amid a global race for AI supremacy.
Our Mission
AI will soon radically transform our society, for better or worse
Experts broadly expect significant progress in AI during our lifetimes, potentially to the point of achieving human-level intelligence. Digital systems with such capabilities would revolutionize every aspect of our society, from business to politics to culture. Worryingly, these machines will not be beneficial by default, and the public interest is often in tension with the incentives of the many actors developing this technology.
LEARN MORE »
We work to ensure AI is developed to benefit humanity's future
In the absence of a dedicated safety effort, AI systems will outpace our ability to explain their behavior, instill our values (biases included) in their objectives, and build robust safeguards against their failures. Our organization empowers students and researchers to contribute to the field of AI safety.
MAILING LIST
Get Involved
Apply for our introductory seminars
If you are new to the field and interested in taking a deep dive into AI Safety, consider joining one of our 8-week reading and discussion groups. In our Intro to AI Safety Governance program, in addition to the basics of the AI Safety field, you will learn about existing and potential ways of steering AI Safety policy at the micro and macro levels. In the AI Safety Alignment program, we aim to give you an overview of AI alignment, the research field that aims to align advanced AI systems with human intentions. Applications for the fall period are due on 21 July.
Expression of Interest
Research
Interested in doing AI alignment research? Reach out to the organizers and we can help you find a mentor.
CONTACT US
Jobs in AI Safety
Check out AI Safety positions at a variety of organizations. You might have heard of some of the bigger ones, like Anthropic and OpenAI.
AI SAFETY POSITIONS »
Take part in worldwide contests
48 hours of intense, fun, and collaborative research on the most interesting questions of our day in machine learning & AI safety!
ALIGNMENT JAM SESSIONS »
How do we make an AI that does not misgeneralize our goals? How do we overcome the seemingly natural desire for survival and build an advanced AI that lets us shut it off?
AI ALIGNMENT AWARDS »