Mini-Symposium on Accelerating AI Safety Progress via Technical Methods—Hybrid In-Person and Virtual

  • Contact: martin.leitgab@gmail.com

Are you working on accelerating progress toward effective AI safety for existential risks? Interested in contributing to this problem, learning about current efforts, or funding active work? Join us:

📍 Location & Time (Hybrid Event):

  • In-person: Picasso Boardroom, 1185 6th Avenue, NYC (capacity limited to 27 attendees)

  • Virtual: Unlimited capacity; a Google Meet link will be sent to registered participants before the event

  • Date: Friday, October 10 at 4 pm EDT

    • (One hour before EA Global NYC 2025 opens nearby)

🎯 The Challenge:

  • AI capabilities are advancing rapidly

  • Current research literature suggests that many AI safety approaches may not scale beyond human-level AI

  • Critical question: Given the catastrophic risks at stake, how can we accelerate progress toward effective technical AI safety solutions for powerful future AI systems that may emerge in the near term?

🚀 Event Focus: This symposium may be one of the first to connect researchers, founders, funders, and forward thinkers around technical methods for accelerating AI safety.

📝 Registration:

  • Free hybrid event with in-person and virtual options

  • In-person registration: Capacity limited to 27 attendees; register early!

    • Registration deadline: Thursday, October 9 at 8 am EDT

  • Virtual registration: Open until event start

🎤 Lightning Talks about your work or interest in the field:

  • Format: 7-minute talk followed by 5 minutes of Q&A

  • To present: Email martin.leitgab@gmail.com with brief description

  • Speaker list: Selected speakers are listed in a public sheet [here]

  • Post-event: A summary with speaker materials will be posted on LessWrong (with speakers' permission)

💡 Topics of Interest:

  • Accelerating discovery of effective safety solutions

  • Scalable effectiveness predictions for solution candidates

  • Automating safety research workflow steps

  • Any technical method for accelerating progress toward effective safety for AI beyond human level

We look forward to your participation!
