Like, a lot of my knowledge of random facts about the world is downstream of having asked LLMs about them.
Uhhh… that seems maybe really bad. Do you sometimes do the kind of check which, if it were applied to The New York Times pre-AI, would be sufficient to make Gell-Mann Amnesia obvious?
Personally, the most I’ve relied on LLMs for a research project was the project behind this shortform in February 2025, and in hindsight (after reading up on some parts more without an LLM) I think I ended up with a very misleading big picture as a result. I no longer use LLMs for open-ended learning like that; it was worth trying, but it turned out not to be a good idea.