I think it’s basically correct to say that evolution is mostly designing a within-lifetime learning algorithm (searching over neural architecture, reward function, etc.), a point I argue for all the time (see here and here). But there’s another school of thought (e.g. Steven Pinker, or Cosmides & Tooby) on which within-lifetime learning is not so important (see here). I think Eliezer & Nate are closer to that second school of thought, and some of our disagreements stem from that.
I do think there are “obvious” things one can say about learning algorithms in general for which evolution provides a perfectly fine example. E.g.: “if I run an RL algorithm with reward function R, the trained model will not necessarily have an explicit endorsed goal to maximize R (or even know what R is)”. If you think invoking evolution as an example carries too much baggage, fine; there are other examples or arguments that would work just as well.
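To make that concrete, here’s a minimal toy sketch: tabular Q-learning on a two-armed bandit. (This is an illustration I made up, not anyone’s actual experiment.) The reward function R exists only inside the training loop; the trained artifact is just a little table of action values:

```python
import random

def R(action):
    """Reward function -- visible only to the training loop, never to the model."""
    return 1.0 if action == 1 else 0.0

q = [0.0, 0.0]            # the entire "trained model": two action values
alpha, epsilon = 0.1, 0.1  # learning rate, exploration rate

for _ in range(1000):
    # epsilon-greedy action selection
    if random.random() < epsilon:
        a = random.randrange(2)
    else:
        a = max(range(2), key=lambda i: q[i])
    # R shapes this update, but only its sampled outputs ever touch q
    q[a] += alpha * (R(a) - q[a])

print(q)  # roughly [0.0, 1.0]: the learned policy prefers arm 1, yet q
          # contains no representation of R, let alone a goal of maximizing it
```

After training, you could silently swap in a different reward function and the policy would keep right on pulling arm 1; nothing in q “knows” what R is. Obviously this toy case doesn’t prove anything about more capable trained models, but it shows the basic conceptual point without needing to invoke evolution at all.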