On my model, humans are pretty inconsistent about doing this.
I’m old enough to have needed to get from point A to point B in a city for which all I had was a torn paper map in a bag (I mean, it was literally in 2 pieces). I can’t imagine a human experienced with paper maps failing to figure it out… But I would not put it past a robot powered by a current-gen LLM to screw it up ~half the time.
these presumptions were contradictory … for dozens of hours before realizing this.
When you eventually did realize this, did you feel that you were already maximally smart (no improvement possible), or did you feel that you wanted to at least try not to make the same mistake tomorrow (without eating more calories per day and without forgetting how to tie your shoelaces)?