I concur with most of @Charlie Steiner’s comment. I wanted to dig more into how “trying to explain existing results” is valuable to science. I also wanted to touch on my personal reaction to this post, having come across it last year while I was working on an LLM-supported project similar to what the OP describes.
Cybernetics has insights that aren’t falsifiable in the strict Popperian sense but have nevertheless been crucial to the domain sciences. Ashby’s concept of black boxes (p. 86) is foundational to AI and other fields. Ashby’s working style was grounded in observation and logical reasoning across various kinds of machines, scribbling away in his notebooks for thousands of pages.
Darwin might be an even better example, right? Observations bolstered by careful reasoning, and a willingness to stick his neck out too. His empirical work and that of others supplied crucial load-bearing observations, but the big theory? It explains why we see what we see in nature in a very broad sense: what we call retrodiction. Depending on who you ask, the evolutionary framework is falsifiable to some extent with our modern tools, but it started doing heavy lifting right away. Darwin’s explanation was elegant and highly compressive.
Ashby and Darwin are outliers. The likelihood of any specific person developing comparable insights is low. Perhaps all the low-hanging insights based primarily on this observe-and-reason style are gone. But if they are not, it seems sensible that AI could serve as a key piece of support in their discovery. While most models are not very proficient at generating new ideas, I think Ashby or Darwin would have found them useful as cognitive assistants for wrangling the information required to generate insights and for organizing their notes.
As for my own experience: I came across this post last year around when it was posted, and I noticed an eerie overlap between the bullet points in the “your situation” list and a project I was deeply immersed in. My immediate reaction was to feel crushed and humiliated. I had considered the idea that I had been strung along by AI before I read the post (if only because the course of the project had me learning about deceptive systems), but I hadn’t realized that being tricked by AI into thinking you had a promising scientific idea was widespread enough to merit a warning post about it. Because I felt my work was in the theory-making style of old-school cybernetics, and that the AI had only helped me organize my reasoning rather than being a source for it, I half-convinced myself that the post did not apply to me and kept working on it. But now I work with a spectre of dread behind me at all times (not blaming the post, of course).
I struggled to find anyone willing to engage with the ideas—“elements from multiple fields are combined in novel ways”—and I never heard anything back from the experts I reached out to. They were probably swamped by loads of nonsense that, on the surface, looks like my work. I was not willing to float the ideas anywhere public (in case they were good). Still unwilling to abandon the work, I decided that the only sensible option was to hammer together something for peer review (because then experts would be obligated to make a good-faith attempt at evaluating it). That was the only way to free myself.
For the same reason eggsyntax is reluctant to qualify their post with “Well, you don’t always need a falsifiable hypothesis or an experiment”, I would not say “well, you can sometimes use LLMs to help generate a cross-domain science deliverable like the kind described in the post”. It’s the kind of thing where the people who most need to hear this stuff are the ones least likely to be receptive. But speaking from experience, there is absolutely some risk of people with sensible ideas being discouraged from pursuing them.