Gott’s equation can be found on Wikipedia, and the main idea is this: if I am observing some external process at a random moment, its current age can be used to estimate how much longer it will exist, since most likely I am observing it somewhere in the middle of its lifetime. Gott himself used this logic as a student to predict the fall of the Berlin Wall, and the wall did in fact fall within the predicted interval, by which time Gott was already a prominent scientist and his article about the method had been published in Nature.
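Under Gott's assumptions the arithmetic is simple: if the fraction f of the lifetime already elapsed is uniform on (0, 1), the remaining lifetime is age · (1 − f)/f, which yields a confidence interval. A minimal sketch (the function name, and using the Berlin Wall's 8-year age in 1969 as the example, are my own illustration):

```python
def gott_interval(age, confidence=0.5):
    """Gott's delta-t argument: observed at a random point in its
    lifetime, a process of a given age has remaining lifetime
    age * (1 - f) / f, with f uniform on (0, 1). Returns the
    symmetric confidence interval for the remaining lifetime."""
    lo = age * (1 - confidence) / (1 + confidence)
    hi = age * (1 + confidence) / (1 - confidence)
    return lo, hi

# Berlin Wall in 1969, 8 years after construction:
# 50% interval is roughly 2.67 to 24 more years.
print(gott_interval(8, confidence=0.5))
```

At 95% confidence the interval widens to age/39 up to 39 × age, which is why the method gives only very loose bounds.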
If we account for the exponential growth of AI research, and assume that I am randomly sampled from all AI researchers who will ever exist, the predicted end comes much sooner. But here everything becomes more speculative: accounting for AI winters dilutes the prediction, and so on.
This anthropic evidence gives you a likelihood function. If you want a probability distribution, you additionally need a prior probability distribution.
Here we used the assumption that the probability of AI creation is uniformly distributed over the interval of AI research, which is obviously false: it should grow toward the end, perhaps exponentially. If we instead assume that the field is doubling every 5 years or so, then roughly half of all AI researchers who will ever exist work during the field's final doubling period. Copernican reasoning then says that, if I am randomly selected from the members of this field, the field will end within the next doubling with something like 50 per cent probability, and within two doublings with 75 per cent.
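The doubling arithmetic above can be checked directly. Under the (strong) assumption that the field doubles each period and then ends abruptly, a fraction 2^(−k) of all researchers come before the last k doublings, so a randomly sampled researcher sees the end within k more doublings with probability 1 − 2^(−k):

```python
def p_end_within(k_doublings):
    """Probability that the field ends within k more doubling periods,
    for a researcher sampled uniformly from everyone who ever works
    in an exponentially doubling field that ends abruptly."""
    return 1 - 2 ** (-k_doublings)

# With 5-year doublings: ~50% by one doubling, ~75% by two.
for k in (1, 2, 3):
    print(k, p_end_within(k))
```

This is only a sketch of the reasoning in the paragraph above, not a serious model; it ignores, among other things, the possibility that growth slows instead of ending.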
TL;DR: anthropics + exponential growth = AGI by 2030.
Could you detail Gott’s equation a bit more? I’m not familiar with it.
Also, do you think that those 62 years are meaningful if we think about AI winters or exponential technological progress?
PS: I think you commented instead of giving an answer (different things in question posts)