Future survey discussion thread!
Obvious things I’d do differently if I were going to do this again are to have a longer Request for Comments period, to be more careful about which answers allowed write-ins, and to post about the survey at least once on other sites. Oh, and if I’d known I was running it, I’d have done it in December of the survey year, not February of the year after.
Some people said thanks for reviving the census in the Further Comments field. Even with the low response count, I felt this was fun and useful for me. I can also see a lot of ways to make it easier to run a second time, from having some tools already set up to print the mean and distribution for each question, to being able to copy and edit the Google Form instead of writing my own from scratch while looking at the old one on another screen. All of that means I’m inclined to try again next year.
When writing this survey, I leaned towards making it very familiar and similar to past surveys, so I didn’t include some questions I otherwise would have. There are a lot of good questions I’d add now that I’ve done the basics once.
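As a rough sketch of the tooling I have in mind, assuming the responses get exported as a CSV (the filename and the idea of iterating over numeric columns are assumptions here, not how the real survey data is laid out):

```python
# Sketch of per-question summary tooling: print the mean and the response
# distribution for every numeric column in a CSV export of the survey.
# "census_responses.csv" is a hypothetical filename.
import pandas as pd

responses = pd.read_csv("census_responses.csv")

for column in responses.select_dtypes("number").columns:
    answers = responses[column].dropna()
    print(f"{column}: mean = {answers.mean():.2f}")
    print(answers.value_counts().sort_index().to_string())
    print()
```

Having something like this ready before the responses come in is most of the work of turning the raw spreadsheet into a writeup.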
My biggest preference for future surveys:
People sometimes characterize differences between ideologies in rationalist-adjacent spaces. For instance, there’s Scott Aaronson’s reform vs. orthodox AI risk distinction.
Those ideological differences tend to mix together multiple distinct beliefs. For instance, Scott Aaronson says that reform AI risk thinkers tend to believe both “that trying to get a broad swath of the public on board with one’s preferred AI policy is something close to a deontological imperative” and “that research on actually-existing systems [is] one of the only ways to get feedback from the world about which AI safety ideas are or aren’t promising”.
Mixing together multiple distinct beliefs into a single axis isn’t necessarily unreasonable if they are correlated. But it would be interesting to me to ask about a bunch of specific beliefs so that the correlations can be mapped out using standard methods such as PCA/factor analysis.
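A minimal sketch of the kind of analysis I mean, assuming each belief is asked as a 1–5 agreement scale; the file name and the belief column names below are hypothetical, not taken from any actual survey:

```python
# Sketch of the suggested analysis: ask about many specific beliefs
# (say, 1-5 agreement scales), then run PCA to see whether responses
# collapse onto a few correlated axes. The file and column names are
# hypothetical placeholders.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

belief_columns = [
    "public_buy_in_is_imperative",
    "empirical_feedback_is_essential",
    "high_p_doom",
    "favor_research_pause",
]

responses = pd.read_csv("census_responses.csv")[belief_columns].dropna()

# Standardize so each belief question contributes equally to the axes.
scaled = StandardScaler().fit_transform(responses)

pca = PCA()
pca.fit(scaled)

# Variance explained by each axis, and which beliefs load on it.
for i, (ratio, loadings) in enumerate(
    zip(pca.explained_variance_ratio_, pca.components_)
):
    print(f"Axis {i + 1}: {ratio:.0%} of variance")
    for belief, weight in zip(belief_columns, loadings):
        print(f"  {belief}: {weight:+.2f}")
```

If the beliefs really do line up along something like a reform-vs-orthodox axis, the first component should explain most of the variance; if not, the loadings show which beliefs actually travel together.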
Be advised: the Request for Comments for the 2023 version is up.
This makes sense. There are parts of what Scott Aaronson describes as “reform thinkers” that are obviously or probably correct, and some elements that are disturbingly misguided. The individual bits and pieces of the axis are what’s valuable, not the axis itself.