nsage

I believe the only answer to the question "how would humans much smarter than us solve the alignment problem?" is this: they would simply make themselves smarter, and if they built AGI at all, they would ensure it remained far less intelligent than they are.

Hence, the problem is avoided by following one maxim: always be smarter than the things you build.