[Question] What other problems would a successful AI safety algorithm solve?

Corporations and governments are in some ways like superintelligences, and in other ways not. Much of economics, political science, and sociology seems to tackle the problem of why institutions fail to align with human interests. Yet the difference in architecture and capabilities between brains and computer programs suggests to me that aligning collective bio-superintelligence is a quite different problem from aligning AI superintelligence. Because we can see and control the building blocks of AI, and have no hard ethical limits on shaping it as we please, aligning AI with human values may be an easier problem than aligning human institutions with human values. A technical solution to AI safety might even be a necessary precursor to understanding the brain and human relationships well enough to provide a technical solution to aligning human institutions.

If we had a solid technical solution to AI safety, would it also give us technical solutions to the problem of human collective organization and governance? Would it give us solutions to other age-old problems? If so, is that reason to doubt that a technical solution to AI safety is feasible? If not, is that reason for some optimism? Finally, does our lack of a technical account of what human value is hinder the development of safe AI?