I think the idea actually works pretty well with superintelligence, with one big exception if you assume we all die. Lots of people don't understand how or why a superintelligence could kill us all, so they naively think that creating one would be a great idea. If we all died, they would finally understand why alignment is a necessary complexity. The only problem is that we would all be dead.