Thanks for your patience: I do think this message makes your point clearly. However, I’m sorry to say, I still don’t think I was missing the point. I reviewed §1.5, still believe I understand the open-ended autonomous learning distribution shift, and also find it scary. I also reviewed §3.7, and found it to basically match my model, especially this bit:
> Or, of course, it might be more gradual than literally a single run with a better setup. Hard to say for sure. My money would be on “more gradual than literally a single run”, but my cynical expectation is that the (maybe a couple years of) transition time will be squandered
Overall, I don’t have the impression that we disagree very much. My guess for what’s going on (and it’s my fault) is that my initial comment’s focus on scaling was not a reaction to anything you said in your post; in fact, you didn’t say much about scaling at all. It was more a response to the scaling discussion I see elsewhere.