Thanks for noticing and including a link to my post Requirements for a STEM-capable AGI Value Learner (my Case for Less Doom). I’m not sure I’d describe it as primarily a critique of mild optimization/satisficing: it makes a slightly broader point, that any value learner foolish enough to be prone to Goodharting, or unable to cope with splintered models or Knightian uncertainty in its Bayesian reasoning, is likely to be bad at STEM, which limits how dangerous it can be (so fixing this is capabilities work as well as alignment work). But yes, that is also a critique of mild optimization/satisficing — or more accurately, a claim that it should become less necessary as your AIs become more STEM-capable, as long as they’re value learners (plus a suggestion of a more principled way to handle these problems in a Bayesian framework).