I more meant “he’s probably thinking about this in the back of his mind fairly often”, as well as trying to be humorous.
He’s also indicated that he doesn’t think anyone should work on AI until the goal-system stability problem is solved. He’s talked about thinking about that problem, but hasn’t published anything on it, which probably means he’s stuck.
Do you know what he would think of work that has a small chance of solving goal stability and a slightly larger chance of helping with AI in general? This seems like a net plus to me, but you seem to have heard what he thinks should be studied from a slightly clearer source than I did.