My focus was meant to be purely on AGI. I added AGI x-risks to hint at the fact that they would come together with it (that is why I mentioned LeCun).
I feel there is a strong belief that AGI is just around the corner (and it might very well be), but I wanted to know what the opposition to that claim looks like. I know there is a lot of solid evidence that we are moving toward more intelligent systems, but understanding the gaps in this "given prediction" might provide useful information (for updating timelines, changing research focus, etc.).
Personally, I might be in the "in-between" position, where I am not sure what to believe in terms of timelines. I am safety-inclined, and I applaud the effort of people in the field, but there might be a blind spot in believing that AGI is coming soon (when the reality might be very different). What if that is not the case? What then? What are the implications for safety research? More importantly, what are the implications for the field of AI as a whole? Companies and researchers might very well ride the hype wave to keep getting funded, gain recognition, etc.
Perhaps an analogy would help. Think about cancer. Everyone knows it exists, and that is not something anyone is going to argue about (hopefully). I cannot come in and ask what the arguments in support of the existence of cancer are, because it is already proven to exist. In the context of AGI, by contrast, I feel there is a lot of speculation, and a lot of people trying to claim they knew the exact day AGI would arrive. That feels like a distraction to me. Even the posts about "getting your affairs in order". It feels wrong to just give up on everything without even considering the arguments against the truth you believe in.
I see: you're mostly asking about timelines. You're not asking whether AGI is ever possible, but whether it will happen soon. You should look at the recent post
https://www.lesswrong.com/posts/oC4wv4nTrs2yrP5hz/what-are-the-strongest-arguments-for-very-short-timelines
I think "timelines" is the keyword to search for.
I did consider writing a post similar to the one I linked, asking what the strongest arguments for longer timelines are. I expect those to be at least as weak.
I think the correct summary is that nobody knows. Precisely because we don't know, I think the wise move is to prepare for short timelines.