
nsage

Karma: 49

I believe the only answer to the question “how should humans solve the alignment problem” is this: we should make ourselves smarter first, and if we do build AGI, we should ensure it remains far less intelligent than we are.

Hence, the problem is avoided with this maxim: simply always be smarter than the things you build.

Have frontier AI systems surpassed the self-replicating red line?

nsage · 11 Jan 2025 5:31 UTC
4 points
0 comments · 4 min read · LW link