I found this an extremely surprising result. Geoff Anders claims immediate effects from essentially only two interventions:
But you see, a plan can’t be very good if it can be thwarted by some mild fluctuations in the weather. Let’s say there’s a thunderstorm and the power goes out. Well, then the AGI will turn off. And if it turns off, it won’t be able to accomplish its goal of becoming the best possible chess player. You see, if we humans executed your plans, we would all die of starvation. We would study the rules of chess, we’d calculate chess moves and then we’d die...
and
But what if humans don’t want to install a backup generator? … Alright, you have made some progress. You’ve solved the power source problem. But in doing so you replaced it with another problem: the human compliance problem.
There were more interventions between these two and the surveys of average belief, but these two interventions caused at least a few students to generate the idea that AGIs are much more creative and powerful than in Terminator 2. The effect on the tail seems to me more important and surprising than the effect on the mean.
There were a lot of interventions before these two, including whatever idiosyncrasies Anders’s philosophy course had, but the outcome before these two interventions seemed pretty standard, as did the first AI day. The chess exercise is probably not common, and the two quotes above require its context, but the initial reaction to the chess exercise did not surprise me.