I am somewhat confused about how somebody could think they have made a major breakthrough in computer science without being able to run some algorithm that does something impressive.
Imagine thinking you have an algorithm that solves some pathfinding problem. You run it on pathfinding problems, and either it doesn’t work, or it is too slow, or it actually works.
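A minimal sketch of that kind of reality check, assuming a hypothetical candidate `my_shortest_path` (stubbed here with BFS so the sketch runs; the graph and all names are made up for illustration): run it on a graph small enough to brute-force and compare the answers.

```python
from collections import deque
from itertools import permutations

def my_shortest_path(graph, start, goal):
    # Stand-in for the algorithm under test (plain BFS on an unweighted graph).
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

def brute_force_shortest(graph, start, goal):
    # Exhaustively try every simple path; only feasible on tiny graphs,
    # which is fine -- tiny graphs are enough to catch a broken algorithm.
    nodes = [n for n in graph if n not in (start, goal)]
    best = None
    for r in range(len(nodes) + 1):
        for middle in permutations(nodes, r):
            path = (start, *middle, goal)
            if all(b in graph[a] for a, b in zip(path, path[1:])):
                if best is None or len(path) - 1 < best:
                    best = len(path) - 1
    return best

graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
assert my_shortest_path(graph, "A", "E") == brute_force_shortest(graph, "A", "E")
```

If the candidate disagrees with brute force on even one small instance, reality has spoken.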
Or imagine you think you have found a sorting algorithm that is somehow much faster than quicksort. You just run it and see if that is actually the case.
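"Just run it and see" can be sketched in a few lines, assuming a hypothetical `my_sort` (here a plain merge sort just so the sketch runs end to end): first check correctness against the standard library, then time both.

```python
import random
import timeit

def my_sort(xs):
    # Placeholder for the "breakthrough" sort under test (ordinary merge sort).
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = my_sort(xs[:mid]), my_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

data = [random.randrange(10**6) for _ in range(10_000)]

# Step 1: is it even correct?
assert my_sort(data) == sorted(data)

# Step 2: is it actually faster than the built-in sort?
t_mine = timeit.timeit(lambda: my_sort(data), number=10)
t_std = timeit.timeit(lambda: sorted(data), number=10)
print(f"my_sort: {t_mine:.3f}s  sorted(): {t_std:.3f}s")
```

Either the timings back up the claim or they don’t; there is no room left for feeling clever.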
It seems like “talking to reality” is really the most important step. Somehow it’s missing from this article. Edit: Actually it is in step 2. I am just bad at skim reading.
Granted, the above does not work as well for theoretical computer science. It seems easier to be confused about whether your math is right than about whether your algorithm efficiently solves a task. But math is still pretty good at showing you when something doesn’t make sense, if you look carefully enough. It lets you look at “logical reality”.
The way to avoid being led to believe false things really doesn’t seem different whether you use an LLM or not. Probably an LLM triggers some social circuits in your brain that make it more likely you become falsely confident. But this seems more like a quantitative than a qualitative difference.
This is a useful video to me. I am somehow surprised that physics crackpots exist to the extent that this is a known concept. I actually knew this before, but failed to relate it to this article and my previous comment.
I once thought I had solved P=NP. And that seemed very exciting. There was some desire to just tell some other people I trust. I had some clever way to transform SAT problems into a form that is tractable. Of course, later I realized that transforming solutions of the tractable form back into solutions of the original SAT problem was itself NP-hard. I had figured out how to take a SAT problem and turn it into an easy problem that was totally not equivalent to the SAT problem. And then I marveled at how easy it was to solve the easy problem.
My guess at what is going on in a crackpot’s head is probably exactly this. They come up with a clever idea that they can’t tell how it fails. So it seems amazing. Now they want to tell everybody, and will do so. That seems to be what makes a crackpot a crackpot: being overwhelmed by excitement and sharing their thing, without trying to figure out how it fails. And intuitively it really really feels like it should work. You can’t see any flaw.
So it feels like one of the best ways to avoid being a crackpot is to try to solve a bunch of hard problems, and fail in a clear way. Then when solving a hard problem your prior is “this is probably not gonna work at all” even when intuitively it feels like it totally should work.
It would be interesting to know how many crackpots are repeat offenders.
They come up with a clever idea that they can’t tell how it fails. So it seems amazing... And intuitively it really really feels like it should work. You can’t see any flaw.
I do think this is an important aspect. Turing award winner Tony Hoare once said,
There are two methods in software design. One is to make the program so simple, there are obviously no errors. The other is to make it so complicated, there are no obvious errors.
and I think there’s a similar dynamic when people try to develop scientific theories.
I think that’s true, but the addition of LLMs at their current level of capability has added some new dynamics, resulting in a lot of people believing they have a breakthrough who previously wouldn’t. For people who aren’t intimately familiar with the failure modes of LLMs, it’s easy to believe them when they say your work is correct and important — after all, they’re clearly very knowledgeable about science. And of course, confirmation bias makes that much easier to fall for. Add to that the tendency for LLMs to be sycophantic, and it’s a recipe for a greatly increased number of people (wild guess: maybe an order of magnitude more?) believing they’ve got a breakthrough.
There’s a video I like from Angela Collier, a physicist, on this topic. Full video: https://www.youtube.com/watch?v=11lPhMSulSU / summary: https://claude.ai/share/0b3dc444-0489-42a2-9a72-df22b58589e9. This phenomenon has been around for a while; it seems to mainly show up in fields where there is dramatic fame to be had as the person who made some incredible breakthrough.
Talking to reality doesn’t seem missing from OP’s article, though? It’s the preregistration part.