I am not at all sure that these lessons are transferable to cryo or AI risk advocacy.
I felt that the main transferable lesson was the broader point about a change in habits requiring a change in the overall culture. Sometimes you can do it with friendly door-to-door education, but sometimes it requires a broader shift, as with the adoption of antisepsis. That seems like rough evidence of MIRI’s and CFAR’s efforts at building cultures of thinking about these things in a new manner being a strategy worth pursuing. This article caused me to assign a considerably greater probability to the possibility of CFAR having a major effect than I’d done before.
There are also some obvious parallels: e.g., taking steps to increase AI safety doesn’t really provide emotional benefits to current AI researchers, nor does the thought of cryonics provide emotional benefits to most of the people considering signing up — though those points might be relatively well-understood here already.