98% certain that the singularity will happen before you die (which could easily be 2070)? This seems like an unjustifiably high level of confidence.
For what it’s worth, the Uncertain Future application gives me a 99% chance of a singularity before 2070, if I recall correctly. The mean of my distribution is 2028.
I really wish more SIAI members talked to each other about this! Estimates vary wildly, and I’m never sure whether people are giving estimates that take their decision theory into account or not (that is, thinking ‘We couldn’t prevent a negative singularity if it were to occur in the next 10 years, so let’s discount those worlds and exclude them from our probability estimates’). I’m also not sure whether people are giving far-off estimates because they don’t want to think about the implications otherwise; because they tried to build an FAI and it didn’t work; because they want to signal sophistication, and sophisticated people don’t predict crazy things happening very soon; because they are taking an outside view of the problem; or because they’ve read the recent publications at the AGI conferences and in various journals, thought about the advances that need to be made, estimated the rate of progress, and determined a date using the inside view (like Steve Rayhawk, who gives a shorter time estimate than anyone else; Shane Legg, who I’ve heard also gives a short estimate, though I’m not sure about that; Ben Goertzel, who I’m again not entirely sure about; Juergen Schmidhuber, who seems to be predicting it soonish; or Eliezer, who used to have a soonish estimate with very wide tails, though I have no idea what his thoughts are now). I’ve heard the people at FHI also have distant estimates, and a lot of narrow AI people predict far-off AGI as well. Where are the ‘singularity is far’ people getting their predictions?
UF is not accurate!
True. But the mean of my distribution is still 2028 regardless of the inaccuracy of UF.
The problem with the Uncertain Future is that it is a model of reality which lets you play with the parameters of the model, but not its structure. For example, it has no option for “model uncertainty”, i.e. the possibility that the assumptions it makes about the forms of the probability distributions are incorrect. And a lot of those assumptions were made for the sake of tractability rather than realism. I think the best way to use it is as an intuition pump for your own model, which you could build in Excel or in your head.
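To make the “intuition pump” suggestion concrete, here is a minimal sketch of the kind of back-of-the-envelope model you could build yourself. The lognormal shape and its parameters are my own illustrative assumptions, not anything taken from the Uncertain Future’s internals:

```python
# Toy timeline model: pick a distribution over years-until-arrival, then read
# off the summary numbers people quote (mean arrival year, P(before some date)).
# The lognormal and its parameters here are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
base_year = 2010                                   # roughly when this discussion happened
years_until = rng.lognormal(mean=2.7, sigma=0.6, size=100_000)
arrival_year = base_year + years_until

print("mean arrival year:", round(arrival_year.mean()))     # comes out near 2028
print("P(before 2070):", (arrival_year < 2070).mean())      # comes out near 0.99
```

Swapping in your own distribution (or just a hand-drawn histogram in Excel) and watching how the quoted summary numbers move is the whole exercise.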
Giving probabilities of 99% is a classic symptom of not having any model uncertainty.
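To put made-up numbers on that: if you reserve even 5% of your probability mass for “the model’s structure is wrong” and fall back to a 50/50 guess in that case, the most the model’s internal 99% can give you is about 0.95 × 0.99 + 0.05 × 0.5 ≈ 0.97.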
If Nick and I write some more posts, I think this would be the theme. Structural uncertainty is hard to think around.
Anyway, I got my singularity estimates by listening to lots of people working at SIAI and seeing whose points I found compelling. When I arrived at Benton I was thinking something like 2055. It’s a little unsettling that the more arguments I hear, from both sides, the nearer in the future my predictions get. I think my estimates are probably biased too heavily towards Steve Rayhawk’s, but this is because everyone else’s estimates seem to rest on outside-view considerations that I find weak.