I think your logic slipped. I’m not saying creating AI wouldn’t be different. I’m saying that the generator of doom predictions keeps predicting doom, and then hedging, and then re-predicting, while insisting that its new prediction is “really actually different this time”. Just like the last twenty times. But no, really, this time is really actually truly different!
If you’re talking about a real doom, such as building a superior species, and if it’s actually going to happen at some point, then yes at some point those doom predictions will in fact pan out. But if the generator of doom predictions is optimizing for feeling and expressing doom more than making accurate predictions, then it’s close to accidental that it ends up right at some point.
I don’t think it’s quite that extreme here. But saying “No, creating AI really would be different!” doesn’t affect the reasoning whatsoever. That just makes it a potent source of viral doom memes.
Although I don’t like comments starting with “your logic slipped”, because it gives off passive-aggressive “you are stupid” vibes, I will reply.
So what you are saying is: yes, this time is different, just not today. It will definitely happen and all the doomerism is correct, but not on a short timeline, because ____ (insert reasoning that differs from what the top AI minds are saying today).
This is actually, and very blatantly, a self-preservation mechanism called “normalcy bias”, which is very well documented in the human species.
> Although I don’t like comments starting with “your logic slipped”, because it gives off passive-aggressive “you are stupid” vibes, I will reply.
Sorry, that’s not how I meant it. I meant it more like “Oh, I think your foot slipped there, so if you take another step I think it won’t have the effect you’re looking for.” We can all slip up. It’s intended as a friendly note.
I agree that, on rereading it, it didn’t come across that way.
> So what you are saying is: yes, this time is different, just not today. It will definitely happen and all the doomerism is correct, but not on a short timeline, because ____ (insert reasoning that differs from what the top AI minds are saying today).
Uh, no. That’s not what I’m saying.
I’m saying something more like: if it turns out that doomerism is once again exaggerated, perhaps we should take a step back and ask what’s creating the exaggeration instead of plowing ahead as we have been.
How can you know if it’s exaggerated? It’s like an earthquake: the fact that it hasn’t happened yet doesn’t mean it won’t be destructive when it does. The superintelligence slope doesn’t stop somewhere to let us evaluate, nor do we have any kind of signal that the more time passes, the more improbable it becomes.