Let’s try a simple calculation. What is the expected FAI/UFAI ratio when friendliness is not proven? According to Eliezer’s reply in this thread, it’s close to zero:
your conditionally independent failure probabilities add up to 1 and you’re 100% doomed.
So let’s generously overestimate it as 1 in a million, as opposed to a more EY-like estimate of 1 in a gazillion.
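The intuition behind Eliezer’s quote can be sketched numerically: if an unproven design has n conditionally independent ways to fail, each failing with probability p, then the chance that none of them fires is (1 − p)^n, which collapses toward zero as n grows. The function and numbers below are illustrative assumptions for this sketch, not estimates taken from the thread.

```python
def p_friendly(n_failure_points: int, p_each: float) -> float:
    """Probability that none of n independent failure points fires."""
    return (1 - p_each) ** n_failure_points

# Even modest per-step failure probabilities compound quickly:
print(p_friendly(10, 0.5))    # ~0.001
print(p_friendly(100, 0.5))   # ~8e-31, far below "1 in a million"
```

On these toy assumptions, a hundred coin-flip-grade failure points already drive the success probability far below the 1-in-a-million overestimate used above.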
Ignoring the issue of massive overconfidence, why do you even think these concepts are clearly enough defined to assign probability estimates to them like this? It seems pretty clear that they are not. Before discussing the probability of a poorly defined class of events, it is best to try to say what it is that you are talking about.
Well obviously you can assign probabilities to anything—but if the event is sufficiently vague, doing so in public is rather pointless—since no one else will know what event you are talking about.
I see that others have made the same complaint in this thread—e.g. Richard Loosemore:
before deciding exactly how many angels can dance on the head of a pin, you have to make sure the “angel” concept is meaningful enough that questions about angels are meaningful
Feel free to explain why it is not OK to assign probabilities in this case. Clearly EY does not shy away from doing so, as the quote indicates.