Yeah. To be clear, I didn’t intend for my comment to make it sound like I think things are easy once we've solved alignment. It might be difficult enough that pausing AI is required to solve it (a position I’m sympathetic to anyway).
I just meant to communicate that if we solve alignment, the remaining problem is more like a very high-stakes version of getting the person you want elected president. It’s a very difficult task, but not one where the difficulty lies in conceptual confusion or in theoretical questions we don’t have answers to. Yet discussions of these post-ASI topics usually treat it as if it were.