That depends: what are you talking about? I seem to recall you defined the term as something that Eliezer might agree with. If you’ve risen to the level of clear disagreement, I haven’t seen it.
A good refinement of the question is: how do you think AI could go wrong (that being Eliezer’s field) if we reject whatever you’re asking about?
You would have the exact failure mode you are already envisaging: clippies and so on. OMC is a way AI would not go wrong. MIRI needs to argue that it is infeasible or unlikely in order to show that uFAI is likely.