Most of that comes from me sharing the same so-called pessimistic (I would say realistic) expectations as some LWers (e.g. Yudkowsky’s AGI Ruin: A List of Lethalities) that the default outcome of AI progress is unaligned AGI → unaligned ASI → extinction, that we’re fully on track for that scenario, and that it’s very hard to imagine how we’d get off that track.
Ok, but I don’t see those LWers also saying >99%, so what do you know that they don’t which allows you to justifiably hold that kind of confidence?
That’s a disbelief in superintelligence.
For what it’s worth, after rereading my own comment I can see how you might think that. With that said, I do think superintelligence is overwhelmingly likely to be a thing.