The rationality discussion is a loss-leader that brings smart, open-minded people into the shop. FAI activism is the high-margin item LW needs to sell to remain profitable.
If that’s the case, then LW is failing badly. There are a lot of people here like me who have been convinced by LW to worry much more about existential risk in general, but who are not at all convinced that AI is a major component of that risk, and who, even granting that it is, are not convinced that the solution is some notion of Friendliness in any useful sense. Moreover, this sort of phrasing makes the ideas about FAI sound dogmatic in a very worrying way. The Litany of Tarski seems relevant here: I want to believe that AGI is a likely existential risk if and only if AGI is a likely existential risk. If LW attracts or creates a lot of good rationalists and they find reasons why we should focus more on some other existential risk, that’s a good thing.