There’s a nice brain-like vision model here, and it even parses optical illusions in the same way people do. As far as I understand it, if there’s a sudden change of, um, color, or whatever it is for migraine aura, it has to be (1) an edge of a thing, (2) an edge of an occluding thing, (3) a change of texture within a single surface (e.g. wallpaper). When you block a head with your hand, your visual system obviously and correctly parses it as (2). But here there’s no occluder model that fits all the visual input data—maybe because some of the neurons that would offer evidence of an occluding shape are messed up and not sending those signals. So (2) doesn’t fit the data. And there’s no single-surface theory that fits all the visual input data either, so (3) gets thrown out too. So eventually the visual system settles on (1) as the best (least bad) parsing of the scene.
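The elimination process described above can be sketched as a toy decision rule. (This is purely my own illustration of the reasoning, not the actual model from the linked paper; the function and its arguments are made up for clarity.)

```python
# Toy sketch: the visual system entertains three hypotheses for a sudden
# visual discontinuity and settles on the least bad one that fits the data.
def least_bad_parse(occluder_model_fits, single_surface_fits):
    """Return the least-bad interpretation of a sudden visual discontinuity."""
    if occluder_model_fits:
        return "edge of an occluding thing"       # (2) e.g. a hand in front of a head
    if single_surface_fits:
        return "texture change within one surface"  # (3) e.g. a wallpaper pattern
    return "edge of a thing"                      # (1) the fallback the aura forces

# The aura case: no occluder model fits, no single-surface model fits,
# so the system falls back to "edge of a thing".
print(least_bad_parse(occluder_model_fits=False, single_surface_fits=False))
# → edge of a thing
```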
I would conjecture that if we directly stimulated the retina to reproduce the shapes and colors of migraine auras, the brain would correctly see it as an occlusion, and thus correctly infer the existence of occluded heads etc.
My hypothesis is that the migraine aura is actually injected at an intermediate abstraction level. (After all, it’s not something happening on the physical retina, right?) It therefore interferes with the object representations themselves, rather than providing new low-level data for the brain to interpret normally.
I agree with that, as long as “intermediate abstraction level” is broad enough to also include V1. When I wrote “some of the neurons...are messed up and not sending those signals”, I was mostly imagining neurons in V1. Admittedly it could also be neurons in V2 or something; I dunno. I agree with you that it’s unlikely to originate before V1, i.e. in the retina or LGN (the thalamic waystation between retina and V1). (Not having thought too hard about it.)
(My vague impression is that the lateral connections within V1 are doing a lot of the work in finding object boundaries.)
I dunno, something like that, I guess.
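The distinction in this thread, a perturbation added at the input versus one injected into an intermediate representation, can be illustrated with a toy two-stage pipeline. (Entirely my own sketch; the stage functions and numbers are made-up stand-ins for retina→V1→higher-area processing, not real neuroscience.)

```python
# Toy sketch contrasting the retinal-stimulation conjecture with the
# intermediate-injection hypothesis. All names/values are illustrative.

def stage1(retinal_input):
    # Stand-in for early processing (retina/LGN -> V1-like features).
    return [2 * x for x in retinal_input]

def stage2(features):
    # Stand-in for later processing (features -> object-level percept).
    return sum(features)

retina = [1.0, 2.0, 3.0]
noise = [10.0, 0.0, 0.0]  # the "aura-shaped" pattern

# Conjecture: add the aura pattern at the retina. Stage 1 processes it
# like any other input, so later stages can interpret it normally
# (e.g. as an ordinary occluder).
retinal_injection = stage2(stage1([r + n for r, n in zip(retina, noise)]))

# Hypothesis: the aura is injected *after* stage 1, corrupting the
# intermediate features directly, so later stages never get to treat it
# as ordinary low-level input.
intermediate_injection = stage2([f + n for f, n in zip(stage1(retina), noise)])

print(retinal_injection, intermediate_injection)  # → 32.0 22.0
```

The same noise pattern yields different downstream results depending on where it enters the pipeline, which is the whole point of asking *where* the aura is injected.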