The real bone of contention here seems to be the long chain of inference leading from common scientific/philosophical knowledge to the conclusion that FAI is a serious existential risk.
I am assuming you meant uFAI or AGI instead of FAI.
The FAI problem, on the other hand, is at the top of a large house of inferential cards, so that Eliezer is saving the world GIVEN that W, X, Y and Z are true.
For my part, the conclusion you mention seems to be the easy part; I consider it an answered question. The ‘Eliezer is saving the world’ part is far more difficult for me to evaluate, given the social and political intricacies that must be accounted for.
Don’t forget that some people, e.g. Roko, think that FAI, not just uFAI, is a serious existential risk.