Yeah, I agree with this. If you feed an LLM enough hints about the solution you believe is right and have it generate ten solutions, one of them will sound to you like the right one.
For me, this is significantly different from the position I understood you to be taking. My push-back was essentially the same as
“has there been, across the world and throughout the years, a nonzero number of scientific insights generated by LLMs?” (obviously yes),
and I created the question to see if we could substantiate the “yes” here with evidence.
It makes somewhat more sense to me for your timeline crux to be “can we do this reliably” as opposed to “has this literally ever happened”—but the claim in your post was quite explicit about the “this has literally never happened” version. I took your position to be that this-literally-ever-happening would be significant evidence towards it happening more reliably soon, on your model of what’s going on with LLMs, since (I took it) your current model strongly predicts that it has literally never happened.
This strong position even makes some sense to me; it isn’t totally obvious whether it has literally ever happened. The chemistry story I referenced seemed surprising to me when I heard about it, even considering selection effects on what stories would get passed around.
There is a specific type of thinking, which I tried to gesture at in my original post, that LLMs seem to be literally incapable of. The phrase “scientific insight” can be unpacked in more than one way, and different interpretations fall on either side of that line.
Yeah, that makes sense.