So, first let me give you some reasons it was valuable to me, which I think will also hold for other people:
It created space for reconsidering AI safety from the ground up, which is important because I can often become trapped by my plans once they have been set in motion.
It offered an opportunity to learn from and teach others about AI safety, including people I wouldn't have expected to have something to teach me. Usually this happened when they said weird things that knocked me out of local maxima created by my relative immersion in the field, but sometimes they taught me about things I thought I understood but didn't really, because I hadn't spent as much time as they had specializing in some other small part of the AI safety field. (I'd give examples, except it's been long enough that I can't remember the specifics.)
It let me connect with folks I otherwise would not have met because they are less active on LW or don't live in the Bay Area. Over the years, knowing other people in the space has proven fruitful in a variety of ways: increased willingness to consider each other's research and give each other the benefit of the doubt on new and weird ideas, access to people who are willing and excited to bounce ideas around with you, and feeling connected to the community of AI safety researchers so this isn't such a lonely project (this last one being far more important than I think many people recognize!).
It let me quickly get feedback on ideas from multiple people with different specializations and interests, feedback that would otherwise have been hard to get if I had to rely on them, say, interacting with my posts on LW or responding to my emails.
In the end, though, what most motivates me to make such a strong claim is how much more valuable it was than I expected. I anticipated a nice few days getting to work and think full time on a thing I care greatly about but, due to a variety of life circumstances, find hard to devote more than ~15 hours a week to, averaged over many weeks. Instead it turned out to be a catalyst: it got me to reconsider my research assumptions, re-examine my plans, help others, learn I had more to offer than I thought, and get unstuck on problems I'd been thinking about for months without much measurable progress.
In terms of opportunity costs, I would guess that even if you're already spending the majority of your time working on AI safety, and doing so in an in-person, collaborative environment with other AI safety researchers, you would still find it valuable to attend an event like this maybe once a year, to help break out of the local maxima created by that bubble and reconsider your research priorities by interacting with a broader range of folks interested in AI safety.