Maybe some type of oppositional game could help in this regard?
Along the same lines as the AI Box experiment: we have one group "trying to be the worst-case AI," starting right at this moment. Not a hypothetical "worst case," but one beginning from present conditions, as if you were an engineer trying to build the worst AI possible.
The Worst Casers propose one "step" forward in engineering. Then some sort of Reality Checking team (maybe just a general crowd vote?) tries to disprove the feasibility of the step, given the conditions that exist in the scenario so far. Anyone else can submit a "worse-Worst Case" if it is easier, faster, or larger in magnitude than the standing one.
Over time the goal is to crowd-source the shortest credible path to the worst possible outcome, which, if done very well, might actually reach the realm of colloquial communicability.
I’ve started coding editable logic trees like this as web apps before, so if that makes any sense I could make it public while I work on it.
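A minimal sketch of how such an editable logic tree might be modeled, assuming a simple data shape: each proposed step is a node, Reality Checkers up/down-vote its feasibility, and "worse-Worst Cases" branch as alternative children. All names here (`StepNode`, `stands`, `shortestCrediblePath`) are hypothetical, not from any existing implementation:

```typescript
// Hypothetical data model for the crowd-sourced worst-case tree.
interface StepNode {
  id: string;
  description: string;   // the proposed engineering "step"
  upvotes: number;       // Reality Checkers who find it feasible
  downvotes: number;     // Reality Checkers who dispute it
  children: StepNode[];  // alternative next steps branching from here
}

// A step "stands" if the crowd hasn't disproved its feasibility.
function stands(node: StepNode): boolean {
  return node.upvotes >= node.downvotes;
}

// Walk only standing nodes and return the shortest chain of step
// descriptions from the root to a leaf, or null if the root falls.
function shortestCrediblePath(root: StepNode): string[] | null {
  if (!stands(root)) return null;
  const childPaths = root.children
    .map(shortestCrediblePath)
    .filter((p): p is string[] => p !== null);
  if (childPaths.length === 0) return [root.description];
  const best = childPaths.reduce((a, b) => (a.length <= b.length ? a : b));
  return [root.description, ...best];
}
```

One design choice worth flagging: a simple vote majority for `stands` is the crudest possible rule; a real version would probably want argument threads or weighted confidence instead.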
Another possibility is to get Steven Spielberg to make a movie but force him to have Yud as the script writer.
This is nice to read, because Sam has more often been on the defensive in public recently and comes across sounding more "accel" than I'm comfortable with. In this video from 6 years ago, various figures like Hassabis and Bostrom (Sam is not there) propose on several occasions exactly what's happening now: a period of rapid development, perhaps to provoke people into action and regulation while the stakes are somewhat lower. That makes me think this may have been part of what Sam was thinking all along too.
https://www.youtube.com/watch?v=h0962biiZa4