I was talking with Ramana last week about the overall chances of making AI go well, and what needs to be done, and we both sorta surprised ourselves with how much the conclusion seemed to be “More work on inner alignment ASAP.” Then again I’m biased since that’s what I’m doing this month.
Inner alignment is something we need before we can do much of anything else, and among prerequisites like that, it sits near or at the bottom of my list when sorted by the probability that the research community figures it out.