Valid complaint, honestly. I wasn’t really going for “good observables to watch out for” there, though, just for making the point that my current model is falsifiable at all (which I think is what @Jman9107 was mostly angling for, no?).
The type of evidence I expect to actually end up updating on, in real life, if we are in the LLMs-are-AGI-complete timeline, is one of these:
- Reasoning models’ skills starting to generalize in harder-to-make-legible ways that look scary to me.
- Some sort of subtle observable or argument that’s currently an unknown unknown to me, which will make me think about it a bit and realize it upends my whole model.