Do the restraints that have so far prevented governments/corporations from paperclipping the world map onto any proposed strategies for AI alignment?
I think the main restraint here is time. Specifically, self-enhancement of governments and corporations is very slow and unreliable.
And this planet has already been partially “paperclipped”: the environment is being destroyed, and corporations and governments oppress people in many places.
From a pessimistic perspective, the only reason democracy works is that satisfied and educated humans are economically more productive, so you can extract more resources from them if you keep them happy. With the invention of human-level AI, this restraint will be gone.
> From a pessimistic perspective, the only reason democracy works is that satisfied and educated humans are economically more productive, so you can extract more resources from them if you keep them happy.
I find this an optimistic perspective. If Moloch is aligned with satisfaction and education, the win is stable.
> With the invention of human-level AI, this restraint will be gone.
Perhaps. A lot depends on the exact values involved, and on whether it remains true that overall productivity depends on satisfied and educated humans. It also depends on whether human-level AIs are morally relevant entities and whether their satisfaction increases productivity. The term “productivity” gets weird in many singularity visions, but stays somewhat sane in others.