I know many of you folks care a lot about how AI goes. I’m curious how you connect that with – or actively disconnect that from – the new workshops.
The question I’m most interested in: do you have a set of values you intend the workshops to do well by, that don’t involve AI, and that you don’t intend to let AI pre-empt?[1][2]
I’m also interested in any thinking you have about how the workshops relate to x-risk, but if I could pick only one question, it’d be the former.
[1] At least in a given workshop. Perhaps you’d stop doing the workshops overall if your thoughts about AI changed.

[2] Barring edge cases, like someone trying to build an AGI in the basement or whatever.