Technical AI Safety
September 27, 2025 · ETH Zurich
Join 200+ researchers, engineers, policymakers, and career changers from Zurich's tech ecosystem to shape the future of responsible AI development.
Deep dive into the areas that matter most to you
Explore cutting-edge research in interpretability, scalable oversight, and alignment benchmarks. Connect with researchers pushing the boundaries of AI safety.
Navigate the policy landscape, regulatory frameworks, and governance mechanisms essential for responsible AI deployment at scale.
Discover success stories, funding pathways, and community health initiatives. Find your path in the growing AI safety ecosystem.
A full day focused on AI safety
Engage with thought leaders across technical research, governance, and field-building tracks. Sessions designed for all expertise levels.
Connect with 10+ leading AI safety organizations eager to meet potential collaborators, researchers, and team members.
Participate in carefully designed activities that match you with peers, mentors, and hiring teams based on your interests.
Join closed-door discussions that spark Swiss-led projects and partnerships, contributing to the next wave of AI safety initiatives.
Where Innovation Meets Tradition
ETH Zurich stands as one of the world's leading universities for technology and science. Home to 21 Nobel Prize winners, including Albert Einstein, ETH combines 170 years of academic excellence with cutting-edge research facilities. The venue perfectly embodies our mission: bridging traditional academic rigor with pioneering AI safety research in the heart of one of Europe's most innovative cities.
The perfect hub for AI safety innovation
With powerhouses like Google, Anthropic, Meta, and ETH, plus a rapidly expanding AI safety network, Zurich offers a uniquely dense mix of research, industry, and policy talent. Switzerland's tradition of neutrality and its emerging role in global AI governance make it the ideal hub for building the future of safe, beneficial AI.