Do people just read really fast? I think they have some heuristics for figuring out which parts to read and how to skim, which maybe involves something like binary search and tracking abstraction boundaries. But something about this still feels opaque to me.
Even if your heuristics are poor, it may still be worth it to google, open the docs, and walk the obvious links. The point is not to have an algorithm that is guaranteed to find everything relevant, but to try many things that might work.
How do you learn to replicate bugs when they happen inconsistently, with no discernible pattern? Especially when the bug comes up, like, once every couple of days or weeks, instead of once every 5 minutes.
You speed up time. Or, more generally, you prepare an environment that increases the reproduction frequency: slow hardware, a slow network, higher load. You spam clicks and interrupt every animation, because all bugs are about asynchronous things. You save state after a reproduction, or better, right before one, and restart from it. If all else fails, you add logs/breakpoints so you're ready next week. But usually you just look at the code to figure out which paths could manifest as your bug, and then try to reproduce the promising ones.
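A minimal sketch in Python of the "increase reproduction frequency" idea, under my own assumptions (`flaky_operation` and `snapshot_state` are hypothetical placeholders for whatever your system actually does): rerun the suspect path in a tight loop with random timing jitter, and capture state the moment it fails.

```python
import random
import time
import traceback


def flaky_operation():
    """Placeholder for the code path you suspect is racy."""
    ...


def snapshot_state(attempt):
    """Placeholder: dump logs, DB rows, a heap snapshot -- whatever helps post-mortem."""
    print(f"failure on attempt {attempt}; state saved for later inspection")


def hammer(max_attempts=100_000):
    """Rerun the suspect path in a tight loop until the bug shows up."""
    for attempt in range(max_attempts):
        # Random micro-sleeps perturb timing and interleavings; this is often
        # enough to turn a once-a-week race into a once-a-minute one.
        time.sleep(random.uniform(0, 0.005))
        try:
            flaky_operation()
        except Exception:
            traceback.print_exc()
            snapshot_state(attempt)
            return attempt
    return None


if __name__ == "__main__":
    hammer()
```

The same loop also works as the "be ready next week" fallback: leave it running with logging enabled and let it collect a failure while you do something else.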
Personally, I’m against unqualified usage of “uncertainty” in relation to consciousness: it conflates factual uncertainty with ethical uncertainty. And it’s mostly the ethical uncertainty that needs to be worked on. It’s not that there couldn’t be relevant questions about the specifics of information processing in LLMs, but without consensus about what we value in humans, empirical research in the name of helping with ethical questions is mostly a distraction motivated by moral-realist thinking. We already have enough knowledge about the brain to decide on solutions to simple ethical questions now: you don’t need more neuroscience to decide why you wouldn’t value a global workspace implemented in a couple of lines of code that takes 2 MB of RAM.
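To make that last point concrete, here is a deliberately trivial, purely illustrative toy of my own (not anyone's actual model of consciousness) showing the bare functional skeleton that "global workspace" descriptions pick out: competing inputs, a winner-take-all selection, a broadcast. That a system meets this structural description is clearly not, by itself, what makes us value a mind.

```python
# Toy illustration only: the functional skeleton of a "global workspace"
# (competing inputs, winner-take-all selection, broadcast) in a few lines.
class TinyGlobalWorkspace:
    def __init__(self):
        self.workspace = None  # the currently "broadcast" content

    def step(self, candidates):
        # Each module submits a (salience, content) pair; the most salient
        # content wins and is broadcast back to everyone via self.workspace.
        _, content = max(candidates)
        self.workspace = content
        return self.workspace


gw = TinyGlobalWorkspace()
print(gw.step([(0.2, "hunger"), (0.9, "loud noise"), (0.5, "todo list")]))  # -> loud noise
```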