We will be around in 30 years

This post is going to be downvoted to oblivion; I wish it weren't, or that the two-axis vote could be used here. In any case, I prefer to be consistent with my values and state what I think is true, even if that means being perceived as an outcast.

I’m becoming more and more skeptical that AGI means doom. After reading EY’s fantastic post, I am shifting my probabilities towards “this line of reasoning is wrong, and many clever people are falling into very obvious mistakes.” Some of them do so because, in this specific group, believing in doom and having short timelines is well regarded and considered a sign of intelligence. For example, many people are taking pride in “being able to make a ton of correct inferences” before whatever they predict is proven true. This is worrying.

I am posting this for two reasons. One, I would like to come back periodically to this post and use it as a reminder that we are still here. Two, there might be many people out there who share a similar opinion but are too shy to speak up. I do love LW and the community here, and if I think it is going astray for some reason, it makes sense for me to say so loud and clear.

My reason for being skeptical is really simple: I think we are overestimating how likely it is that an AGI can come up with feasible scenarios to kill all humans. All the scenarios I see discussed fall into two categories:

  1. AGI makes nanobots/biotechnology and kills everyone. I have yet to see a believable description of how this takes place.

  2. We don’t know the specifics, but an AGI can come up with plans that you can’t, and that’s enough. That is technically true, but it is also a cheap argument that can be used for almost anything.

It is taken for granted that an AGI will automatically be almighty and capable of taking over in a matter of hours or days. Then everything is built on top of that assumption, which is simply unfalsifiable, because the “you can’t know what an AGI would do” argument is always there.

To be clear, I am not saying that:

  • Instrumental convergence and the orthogonality thesis are not valid

  • AGIs won’t be developed soon (I think it is obvious that they will be)

  • AGIs won’t be powerful (I think they will be extremely powerful)

  • AGIs won’t be potentially dangerous: I think they will be, they might kill significant numbers of people, and they will probably be used as weapons

  • AGI safety is not important: I think it is super important, and I am glad people are working on it. However, I also think that fighting global warming is important, but I don’t think it will cause the extinction of the human race, nor that we benefit in any meaningful way from telling people that it will

What I think is wrong is:

In the next 10-20 years there will be a single AGI that will kill all humans extremely quickly, before we can even respond.

If you think this is a simplistic or distorted version of what EY is saying, you are not paying attention. If you think that EY is merely saying that an AGI could kill a big fraction of humans in an accident, and so on, but that there would be survivors, you are not paying attention.