There’s an entire class of problems within ML that I would describe as framing problems, and the one thing I think LLMs don’t help much with is framing.
I don’t believe these will be solved within the scaling paradigms that have been hypothesised. (Related to what Archimedes linked from Epoch: the constraints are not only in training data but also in the theoretical modelling for fitting on that training data.)
There’s this quote I’ve been seeing from Situational Awareness, that all you have to do is “believe in a straight line on a curve”, and when I hear that and see the general trend extrapolations, my spider senses start tingling. Within the frame of the model, the assumptions behind shorter timelines make sense; if you reject the frame, you start seeing holes.
Those holes are more like open scientific questions that no one has answered, but they raise the variance of timelines by quite a lot.
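To make the frame-dependence concrete, here is a minimal sketch (purely illustrative numbers, not real benchmark data) of how the same early trend supports very different forecasts depending on which curve family you commit to: a straight line in log space versus a saturating curve.

```python
# Purely illustrative: made-up "capability" points, not real benchmark data.
# Shows how the extrapolation depends on the frame you fit, not just on the data.
import numpy as np
from scipy.optimize import curve_fit

years = np.arange(2019, 2025)
log_capability = np.array([0.9, 1.6, 2.2, 2.9, 3.3, 3.8])  # fake log-scale trend

# Frame 1: "believe in the straight line" -- a linear fit in log space.
slope, intercept = np.polyfit(years, log_capability, 1)

# Frame 2: a saturating (logistic) curve -- the trend bends as some constraint binds.
def logistic(t, ceiling, rate, midpoint):
    return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

popt, _ = curve_fit(logistic, years, log_capability,
                    p0=[6.0, 0.5, 2023.0], maxfev=20000)

future = np.arange(2025, 2031)
straight = slope * future + intercept
saturating = logistic(future, *popt)

for t, a, b in zip(future, straight, saturating):
    print(f"{t}: straight-line frame = {a:.1f}, saturating frame = {b:.1f}")
# Both frames fit the observed points roughly equally well; they only come apart
# out of sample, which is exactly where the timelines disagreement lives.
```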
> There’s an entire class of problems within ML that I would describe as framing problems, and the one thing I think LLMs don’t help much with is framing.
Could you say more about this? What do you mean by framing in this context?
> There’s this quote I’ve been seeing from Situational Awareness, that all you have to do is “believe in a straight line on a curve”, and when I hear that and see the general trend extrapolations, my spider senses start tingling.
Yeah, that’s not really compelling to me either; SitA didn’t move my timelines. I’m curious whether you’ve engaged with the benchmarks+gaps argument for AI R&D automation (the timelines forecast), and then with the AI algorithmic progress that automation would drive (the takeoff forecast). Those are the things that actually moved my view.
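For context, here is the rough shape of that benchmarks+gaps argument as I understand it, in a hedged sketch (every number and gap name is invented for illustration; this is not the actual forecast): estimate when the relevant AI R&D benchmarks saturate, add uncertain extra time for each gap between benchmark performance and real-world automation of AI R&D, and combine the uncertainty with a simple Monte Carlo.

```python
# Rough, illustrative sketch of a "benchmarks + gaps" style timelines estimate.
# Every number and gap name here is invented for illustration; NOT the actual forecast.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Step 1: years until the relevant AI R&D benchmarks are saturated (lognormal guess).
benchmark_saturation = rng.lognormal(mean=np.log(2.0), sigma=0.5, size=n)

# Step 2: extra years for each gap between "saturates the benchmark" and
# "actually automates AI R&D" (long-horizon agency, messy codebases, deployment, ...).
gap_names = ["long_horizon_tasks", "engineering_complexity", "feedback_loops", "deployment"]
gaps = sum(rng.lognormal(mean=np.log(0.75), sigma=0.7, size=n) for _ in gap_names)

years_to_automation = benchmark_saturation + gaps

for q in (10, 50, 90):
    print(f"{q}th percentile: {np.percentile(years_to_automation, q):.1f} years")
# The takeaway: the median can be short while the tails stay wide, so the gap
# assumptions, not the benchmark trend itself, are what carry the conclusion.
```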
If you want to engage more fully with skeptics, I really liked going to ICML last year and can recommend it. Also, see this comment for some more details: https://www.lesswrong.com/posts/TpSFoqoG2M5MAAesg/#nQAXHms3JCJ9meBey
Thanks for the link, that’s compelling.