Yeah, agree with that, though since I think many good futures route through substantial pauses, or substantial improvements in human coordination technology, mapping out the degree to which AI systems can uplift people before they are capable of disempowering them seems pretty crucial, so I don’t super agree with this equivocation.