AI makes me very, very afraid, and sad.
I understand “afraid” (let an Unfriendly AI go Foom, and poof, we’re all dead, or worse). But I don’t get “sad”. Could you elaborate a bit?
Well, to use dramatic language, if you thought that everything that was good and fine about humanity—about existence—had a reasonable chance of getting erased forever, wouldn’t that make you sad?
Not yet. I’m just afraid for the moment. And certainly not because of AI. AI is an existential risk, but it also has potentially great benefits. Among them is the reduction of other existential risks. So I’m not sure AI is a net increase in existential risk. Yet.
Of course, whatever is a net increase in existential risk might make me sad. But even then, I tend to be sad when I’ve lost something, not when I think there’s still a chance.
It makes me sad because it means smart people aren’t doing things that are actually useful.