The way to estimate probabilities like that is to break them into pieces. This one divides naturally into two pieces: the probability that an AGI will be created in the not-too-distant future, and the probability that Eliezer will play a critical role if it is. For the former, I estimate a probability of 0.8; but it’s a complex and controversial enough topic that I would accept any probability as low as 10^-2 as, if not actually correct, at least not a grievous error. Any probability smaller than 10^-2 would be evidence of severe overconfidence.
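As a minimal sketch of that decomposition (the 0.8 is the estimate above; the conditional term is a hypothetical placeholder, not a value anyone here has claimed):

```python
# Two-piece decomposition: P(Eliezer plays a critical role)
#   = P(AGI created soon) * P(critical role | AGI created soon)
# p_agi_soon is the estimate stated above; p_critical_given_agi
# is a hypothetical placeholder used purely for illustration.
p_agi_soon = 0.8
p_critical_given_agi = 1e-3   # hypothetical value

p_overall = p_agi_soon * p_critical_given_agi
print(f"P(overall) = {p_overall:.1e}")   # 8.0e-04 under these assumptions
```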
We have to assign probabilities to artificial intelligence first being created on Earth over the Earth's entire lifetime.
So what probability should we give to the first non-biological intelligence being created in the window between 3 million years and 3 million and 50 years from now (not necessarily by humans)? Would it be greater than or less than 10^-2? If less, what justifies your confidence in that statement more than your confidence that it will be created soon?
All these window probabilities have to sum to the chance we assign to AI ever being created over the lifetime of the Earth. So I don't see how we can avoid assigning very small probabilities to AI being created in any particular window.
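To make the summing argument concrete, here is a small sketch. It assumes (for illustration only) that the Earth's remaining habitable lifetime is roughly 10^9 years; carving that into disjoint 50-year windows whose probabilities must sum to at most 1 forces the average window down to about 5×10^-8.

```python
# Sketch: disjoint 50-year windows over the Earth's remaining lifetime
# must have probabilities summing to P(AI ever created) <= 1.
# The 1e9-year lifetime is an illustrative assumption, not a claim from the text.
remaining_lifetime_years = 1e9
window_years = 50

n_windows = remaining_lifetime_years / window_years   # 2e7 disjoint windows
avg_window_probability = 1.0 / n_windows              # upper bound on the average

print(f"number of windows:     {n_windows:.0e}")              # 2e+07
print(f"average window prob <= {avg_window_probability:.0e}") # 5e-08
```

Under those assumptions, a 10^-2 probability for any single 50-year window would be about five orders of magnitude above the average, which is the asymmetry the question above is pressing on.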