A smart human-like mind looking at all these pictures would (I claim) assemble them all into one big map of the world, like the original, either physically or mentally.
On my model, humans are pretty inconsistent about doing this.
I think humans tend to build up many separate domains of knowledge and then rarely compare them, and even believe opposite heuristics by selectively remembering whichever one agrees with their current conclusion.
For example, I once had a conversation about a video game where someone said you should build X “as soon as possible”, and then later in the conversation they posted their full build priority order and X was nearly at the bottom.
In another game, I once noticed that I held two presumptions: that +X food and +X industry are probably roughly equally good, and that +Y% food and +Y% industry are probably roughly equally good. But these presumptions were contradictory at typical food and industry levels: +10% industry might work out to about 5 industry while +10% food might work out to more like 0.5 food, so the flat-value presumption implies the industry bonus is worth roughly ten times as much as the food bonus. I played for dozens of hours before realizing this.
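To make the contradiction concrete, here is a rough sketch with made-up numbers (the 50 industry / 5 food per turn levels are illustrative assumptions, not the actual values from my game):

```python
# Illustrative resource levels (assumed for the example, not from the game):
industry_per_turn = 50
food_per_turn = 5

# Presumption 2 says +10% food and +10% industry are roughly equally good.
bonus_industry = 0.10 * industry_per_turn  # = 5.0 flat industry
bonus_food = 0.10 * food_per_turn          # = 0.5 flat food

# Presumption 1 says equal flat amounts are roughly equally good,
# which makes the +10% industry bonus ~10x the +10% food bonus.
ratio = bonus_industry / bonus_food
print(f"+10% industry ≈ {bonus_industry} flat, +10% food ≈ {bonus_food} flat "
      f"({ratio:.0f}x apart under the flat-value presumption)")
```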
On my model, humans are pretty inconsistent about doing this.
I’m old enough to have wanted to get from point A to point B in a city for which I literally had a torn map in a bag (I mean, it was in 2 pieces). I can’t imagine a human experienced with paper maps who would not figure it out… But I would not put it beyond a robot powered by a current-gen LLM to screw it up ~half the time.
these presumptions were contradictory … for dozens of hours before realizing this.
When you eventually did realize this, did you feel like you were already maximally smart (no improvement possible), or did you feel like you wanted to at least try not to make the same mistake tomorrow (without eating more calories per day and without forgetting how to tie your shoelaces)?