I believe the only answer to the question "how should humans solve the alignment problem" is this: we should make ourselves smarter first; and if we do build AGI, we should ensure it remains far less intelligent than we are.
Hence, the problem is avoided by a single maxim: always be smarter than the things you build.