I’m pretty sure a human brain could, in principle, visualize a 4D space just as well as it visualizes a 3D space, and that there are ways to make that happen via neurotech (as an upper bound on difficulty).
Consider: we know a lot about how 4-dimensional spaces behave mathematically, probably no less than we know about how 3-dimensional spaces behave. Once we understand exactly how the brain encodes and visualizes 3D space in its neurons, we would probably also understand how it would do so for a 4D space if it had sensory access to one. Given good enough neurotech, we could manually craft the circuits necessary to reason intuitively in 4D.
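For concreteness on the "we know how 4D behaves mathematically" point, here's a tiny numpy sketch (my own illustration, not anything load-bearing): rotating the vertices of a tesseract is barely harder than rotating a cube, even though 4D rotations work a bit differently.

```python
import numpy as np

# Vertices of a tesseract (4-cube): all 16 sign combinations of (±1, ±1, ±1, ±1).
verts = np.array([[x, y, z, w]
                  for x in (-1, 1) for y in (-1, 1)
                  for z in (-1, 1) for w in (-1, 1)], dtype=float)

def rotate_xw(points, theta):
    """Rotate 4D points in the x-w plane. In 4D, a simple rotation fixes a
    plane rather than an axis, and there are six independent rotation planes
    (xy, xz, xw, yz, yw, zw) instead of three."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(4)
    R[0, 0], R[0, 3] = c, -s
    R[3, 0], R[3, 3] = s, c
    return points @ R.T

rotated = rotate_xw(verts, np.pi / 4)
# Rotation preserves each vertex's distance from the origin, just as in 3D.
```

(If anything, the extra rotation planes are part of why there's "more going on" in 4D, which cuts both ways for the argument.)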
Another insight/observation: insofar as AIs can have imagination, an AI trained in a 4D environment should develop 4D imagination (i.e., the circuits necessary to navigate and imagine 4D intuitively). The same should be true of human-brain emulations in 4D simulations.
This argument seems to work for N-D space for any N, which doesn't seem right. I think we definitely do know less about 4D space than 3D, partly because we're much more interested in 3D, and partly because there's just (a lot) more going on in 4D.
Intuitively, it feels like current AI should be much better at learning to navigate 4D than human brains are. Brains have real architecture-level, baked-in, task-specific circuits, which AIs lack, and reconstructing a 3D world is arguably the most important of those. Sure, you could modify a brain with neurotech to change that, but you could do that for virtually any task, so it doesn't seem very meaningful.
There's also the problem that human sensors are inherently 3D. It's not clear how you would translate eyes into 4D, and if you do pick a way to do it but leave the visual processing circuits the same, those circuits are no longer getting the data stream they expect. Brains are clearly pretty good at coping with this kind of thing, as in blind people, whose visual processing circuits are (at least partially) co-opted for other tasks. But blind people are clearly worse at navigating the 3D world than sighted people, and it seems like the same gap would exist between humans and 4D-native beings (like AIs).
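(To make "pick a way to do this" concrete, here's one option as a hedged numpy sketch, not a claim about how real 4D vision would have to work: treat the hypothetical 4D eye as a pinhole camera whose retina is 3D, so it perspective-projects along w the same way our eyes project along depth.)

```python
import numpy as np

def project_4d_to_3d(points, eye_w=3.0):
    """Perspective-project 4D points onto a 3D 'retina', by direct analogy
    with a pinhole camera projecting 3D onto a 2D image plane: divide the
    first three coordinates by the distance from the eye along the viewing
    axis (here taken to be w). eye_w is the hypothetical eye's w-position."""
    depth = eye_w - points[:, 3]          # distance from the eye along w
    return points[:, :3] / depth[:, None]

# Two points that differ only in w: the one nearer the eye along w
# lands farther from the retina's center, i.e. it looks "bigger".
pts = np.array([[1.0, 0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0, 1.0]])
img = project_4d_to_3d(pts)
```

The catch is exactly the one above: this hands the brain a volumetric 3D image per eye, and nothing in existing visual cortex expects that input format.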
Ah, I get what you are saying, and I agree. It's possible the human brain architecture, as-is, can't process 4D, but I guess we're mismatched in what we think is interesting. The thrust of my intuition here was more "wow, someone could understand N-D intuitively from inside a 3D universe, and this doesn't seem prohibited", regardless of whether they have exactly the architecture of a human brain. Like, the human brain as it is right now might not permit that, and the neurotech might involve a lot of architectural changes (the same applies to emulations). I suppose it's a lot less interesting an insight if you already buy that imagining higher dimensions from within a 3D universe is in principle possible. The human brain being able to do it is a stronger claim that would have been more interesting if I had actually managed to defend it well.
I suppose I was kinda sloppy saying “the human brain can do that”—I should have said “the human brain arbitrarily modified” or something like that.
I definitely think it's interesting that it's possible for computations running on an N-D substrate to imagine / intuit (N+1)-D, but yeah, I feel like that's mostly a given, because we have the concept of (N+1)-D in the first place.
There are different levels of "imagine / intuit" though. Some people have particularly good or bad intuition for the 3D space we live in. I took your claim to be something like "the average brain could intuit 4D just as well as 3D, maybe requiring slight modification". I think the modifications needed to reach true parity would be pretty extensive, because of how much 3D-specific architecture (as opposed to weights) human brains have. I do agree the modifications are theoretically possible, but so are the modifications that would give a fruit fly human-level cognition, so "theoretically possible with arbitrary modification" isn't saying much.
Thought experiment: If a mad scientist gave a newborn infant a third eye that was offset along a fourth spatial dimension from the baby’s other two eyes, the baby’s brain would naturally acquire the ability to visualize in four dimensions. Wiring up three eyes probably requires three visual cortices, which will have knock-on effects on the overall geometry of the brain. I doubt that it requires the brain itself to be a 4D structure though.