I think that the sharp left turn is also relevant to ELK, if it leads to your system not generalizing from “questions humans can answer” to “questions humans can’t answer.” My suspicion is that our key disagreements with Nate are present in the case of solving ELK and are not isolated to handling high-stakes failures.
(However, it’s frustrating to me that I can never pin down Nate or Eliezer on this kind of thing, e.g. would they still be pessimistic if there were a low-stakes AI deployment in the sense of this post?)