Sorry, maybe I’m dense. How is this FAI-relevant?
“It takes longer to develop value-preserving AI technologies than to develop stuff that’s cool but dangerous (‘more fun than survival’)”
Via the notion of a Great Filter, and through existential risk more generally. It’s a bit of a stretch, for sure, but the link is there.
The Great Filter aspect is explicit. But that seems extremely tenuous. Rationalists should worry about the Great Filter whether or not it has anything to do with FAI.
Agreed. I was just saying that there is a link, and it’s even reasonably salient within the context of this site. I make no claim that it is the most appropriate link to draw—and, indeed, would have recommended a different title.