Cool. Your definition of AGI seems reasonable. Sounds like we probably disagree about confidence and timelines. (My confidence, I believe, matches Metaculus. [Edit: It doesn’t! I’m embarrassed to have claimed this.])
I agree that we seem not to be on the path of pausing. Is your argument “because pausing is extremely unlikely per se, most of the timelines where we make it to 2050 don’t have a pause”? If one assumes that we won’t pause, I agree that the majority of probability mass for X doesn’t involve a pause, for all X, including making it to 2050.
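To spell out the arithmetic behind that (a quick sketch, writing $p$ for the unconditional probability of a pause and treating it as small):

$$P(\text{pause} \mid X) = \frac{P(\text{pause} \wedge X)}{P(X)} \le \frac{P(\text{pause})}{P(X)} = \frac{p}{P(X)},$$

so for any outcome $X$ with $P(X) > 2p$, more than half of the probability mass for $X$ lies in no-pause worlds. With $p$ on the order of a few percent, that covers making it to 2050 and essentially any $X$ one would ordinarily condition on.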
I generally don’t think it’s a good idea to put a probability on things where you have a significant ability to decide the outcome (e.g., the probability of getting divorced), and instead encourage you to believe in pausing.
I don’t think Metaculus is that confident. Some questions:
https://www.metaculus.com/questions/19356/transformative-ai-date/
https://www.metaculus.com/questions/5406/world-output-doubles-in-4-years-by-2050/
https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/ (the resolution criteria for this are much weaker than AGI and reasonably likely to trigger much earlier).
Even the last of these has only ~80% by 2050.
You are right and I am wrong. Oops. After writing my comment I scrolled up to the top of my post, saw the graph from Manifold (not Metaculus), thought “huh, I forgot the market was so confident”, and edited in my parenthetical without thinking. This is even more embarrassing because no market question is actually about the probability conditional on no pause occurring, which is a potentially important factor. I definitely shouldn’t have added that text. Thank you.
(I will point out, as a bit of an aside, that economically transformative AI seems like a different threshold than AGI. My sense is that if an AGI takes a million dollars an hour to run an instance, it’s still an AGI, but it won’t be economically transformative unless it’s substantially superintelligent or becomes much cheaper.
Still, I take my lumps.)
In this case, I can at least talk about the probability of a multi-decade pause (with the motivation of delaying AI, etc.) if I were to be hit by a bus tomorrow. My number is unchanged, around 3%. (Maybe there are some good arguments for higher, I’m not sure.)
I agree that if everyone in my decision-theoretic reference class stopped trying to pause AI (perhaps because of being hit by buses), the chance of a pause would be near 0.