FAI mitigates other existential risks: natural disasters, unknown unknowns, failures of human cooperation (Mutually Assured Destruction is too risky), and hostile intelligences, both human and self-modifying transhuman. My credence that, without FAI, existential risks will destroy humanity within 1,000 years is 99%.
I find it unlikely that you are well calibrated when you put your credence at 99% for a 1,000-year forecast.
Human culture changes over time, and it is very difficult to predict how humans in the future will think about specific problems. In less than 100 years we went from criminalizing homosexual acts to lawful same-sex marriage.
Could you imagine everyone adopting your morality in 200 or 300 years? If so, do you think that would prevent humanity from being doomed?
If you don’t think so, I would suggest you evaluate your own moral beliefs in detail.