Stratton’s perceptual adaptation experiments a century ago showed that the brain can adapt to different kinds of visual information: if you wear glasses that turn the picture upside down, you eventually adjust and start seeing it right side up again. And recently some people have been experimenting with augmented senses, like wearing an anklet fitted with cell-phone vibrators that lets you always know which way north is.
I wonder if we can combine these ideas. For example, if you always carry a wearable camera on your wrist and feed its video to a Google Glass-like display, will your brain eventually adapt to having effectively three eyes, one of which is movable? Will you gain better depth perception, a better sense of your surroundings, a better sense of what you look like, and so on?
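The anklet part is simple enough to sketch. Here’s a minimal toy version of the core logic, assuming a magnetometer that reports the wearer’s heading and a ring of evenly spaced vibration motors (the motor count and interface are my own made-up assumptions):

```python
NUM_MOTORS = 8  # hypothetical: motors spaced evenly around the anklet

def motor_for_north(heading_deg):
    """Return the index of the motor currently pointing north,
    given the wearer's compass heading in degrees (0 = facing north)."""
    # In the wearer's body frame, north sits at -heading degrees.
    relative = (-heading_deg) % 360
    return round(relative / (360 / NUM_MOTORS)) % NUM_MOTORS

# Facing east (heading 90), north is 90 degrees to the wearer's left:
print(motor_for_north(90))  # -> 6 (motors indexed clockwise from the front)
```

The hard part isn’t the code, of course; it’s whether a continuous, consistent buzz is enough for the brain to fold the signal into its spatial model.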
One aspect of perceptual adaptation I do not often hear emphasized is the role of agency. I first encountered it in this passage:
The first hours were very difficult; nobody could move freely or do anything without going very slowly and trying to figure out and make sense of what he or she saw. Then something unexpected happened: everything about their bodies and the immediate vicinity that they were touching began to look as before, but everything which could not be touched continued to be inverted. Gradually, by groping and touching while moving around to attain the satisfaction of normal needs, the participants in the experiment found that objects further afield began to appear normal. In a few weeks, everything looked the right way up, and they could all do everything without any special attention or care. At one point in the experiment snow began to fall. Kohler looked through the window and saw the flakes rising from the earth and moving upwards. He went out, stretched out his hands, palms upwards, and felt the snow falling on them. After only a few moments of feeling the snow touch his palms, he began to see the snow falling instead of rising.
There have been other experiments with inverted spectacles. One carried out in the United States involved two people, one sitting in a wheelchair and the other pushing it, both fitted with such special glasses. The one who moved around by pushing the chair began to see normally, and after a few hours, was able to find his way without groping, while the one sitting continued to see everything the wrong way.
--- Moshe Feldenkrais, “Man and World,” in “Explorers of Humankind,” ed. Thomas Hanna
I read about an experiment (no link, sorry) where people wore helmets that gave them a 360-degree view of their surroundings. They were apparently able to adapt quite well, and could eventually do things like grab a ball tossed to them from behind without turning around.
From my experience with focusing on the senses I already have, the mere availability of the data is not sufficient. You really need to process it. The inverted-glasses intervention works well because it also takes away your primary way of interacting with the world. If you only add a sense, most of it can be pretty much ignored, as it doesn’t bring any compelling extra value except for being cool for a while. Color TV was a nice improvement, but not many are jumping on the 3D bandwagon.
So if you really want to go three-eyed, it could be a good bet, from a sense-development perspective, to go new-eye-only mono for a while. Another approach would be to find an environment where the new capabilities are handy enough to make a real difference. I could imagine that fixing and taking apart computers could benefit from that kind of sensing. You could also purposefully build a multilayered desk, so that simply seeing what is on the desk would require hand movement, but many more documents could be open at any time.
Your brain is already filtering out most of the massive amount of input it takes in, which makes it quite expensive to get it to pay attention to yet another sense-datum. A new sense would also require its own “drivers”. I could imagine that managing a movable eye would be more laborious than the micro-adjustments of eye focus. Having a fixed separation between your viewpoints makes the calculations an easy routine; that would have to be expanded into a more general approach for variable separation. There is a camera trick, the dolly zoom, where you change the zoom while simultaneously moving the camera forward or backward, keeping the size of the primary target fixed but stretching the perspective. Big variation in viewpoint separation would induce similar effects, and I could imagine it being nausea-inducing instead of merely cool. Increased mental labour and confusion, at least in the short term, would press against adopting a more expanded sensory experience. So if such a transition is wanted, it is important to make the tempting upsides concrete in practical experience.
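To make the “easy routine” point concrete: in the standard pinhole stereo model, depth is Z = f·B/d (focal length times baseline over disparity), so with a fixed baseline B the brain can treat f·B as a constant. A movable third eye makes B a variable you have to track. A toy illustration (the focal length and baselines are made-up numbers):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic pinhole stereo relation: Z = f * B / d.
    With a fixed baseline (like our two eyes), f * B is a constant
    and depth estimation is a simple routine. A movable third eye
    makes the baseline variable, so the same disparity maps to a
    different depth from moment to moment."""
    return focal_px * baseline_m / disparity_px

# The same 20 px disparity reads as very different depths as the
# wrist camera moves from eye-like separation to arm's length:
for baseline in (0.065, 0.30, 0.60):  # metres
    print(baseline, depth_from_disparity(800, baseline, 20))
```

This is essentially the dolly-zoom effect described above, happening continuously.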
I’ve thought about taking this idea further. Think of applying the anklet idea to groups of people. What if soccer teams could know where their teammates are at any time, even if they can’t see them? Now apply this to firemen, or infantry. This is the startup I’d be doing if I weren’t doing what I’m doing. Plugging data feeds right into the brain, and in particular doing this for groups of people, sounds like the next big frontier.
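A sketch of what the per-player logic might look like, assuming every teammate broadcasts a GPS fix and everyone wears the same kind of motor ring as the compass anklet (all the names and the motor layout here are hypothetical):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees
    clockwise from north (the standard forward-azimuth formula)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360

def motor_for_teammate(my_heading_deg, bearing_to_mate_deg, num_motors=8):
    """Rotate the teammate's bearing into the wearer's body frame and
    pick the vibration motor pointing toward them."""
    relative = (bearing_to_mate_deg - my_heading_deg) % 360
    return round(relative / (360 / num_motors)) % num_motors

# A teammate due east of me while I face north -> right-side motor:
print(motor_for_teammate(0, bearing_deg(60.0, 24.0, 60.0, 24.001)))  # -> 2
```

With several teammates you’d need some scheme for multiplexing (distinct rhythms per player, or only signalling the nearest few), which is probably where the real design work is.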
What other applications for groups of people can you imagine, apart from having a sense of each other’s position?

Whatever team state matters. Maybe online/offline, maybe emotional states, maybe biofeedback (hormones? alpha waves?) but cross-team. Maybe just ‘how many production bugs we’ve had this week’.
But if we’re talking startups, I’d probably look at where the money is and go there. Can this be applied to groups of traders? C-level executives? Medical teams? Maybe some other target group is both flush with cash and an early adopter of new tech?
Better: put it on your personal drone, which normally orbits you but can be sent out to look at things...

I have magnets implanted in two of my fingertips, which extend my sense of touch so that I can feel electromagnetic fields. I did an AMA on Reddit a while ago that answers most of the common questions, but I’d be happy to answer any others.
To touch on it briefly, alternating fields feel like buzzing, and static fields feel like bumps or divots in space. It’s definitely become a seamless part of my sensory experience, although most of the time I can’t tell it’s there because ambient fields are pretty weak.
There’s already some brain-plasticity research that does this for people who have lost senses. I can’t remember a specific example, but I know there are quite a few in the book “The Brain That Changes Itself”.
Well, technologies like BrainPort allow one to “see” with the tongue.

I would guess strongly (75%) that the answer is yes. There are incredible stories about people’s brains adapting to new inputs. There is one paper in the neuroscience literature showing that if you connect a video input to a blind cat’s auditory cortex, that brain region will develop neural structures usually associated with vision (like edge detectors).
This makes me wonder what could be done with, say, a Bluetooth earbud and a smartphone, both of which are rather less conspicuous than Google Glass. Not quite as good as connecting straight to the auditory cortex, but still. The first thing that comes to mind is trying to get GPS navigation to work on a System 1 rather than System 2 level, through subtle cues rather than interpreted speech.
[Edit: or positional cues rather than navigational. Not just knowing which way north is, but knowing which way home is.]
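On the earbud idea, the cue computation itself is trivial; the open question is whether something this subtle can sink to the System 1 level. A sketch, assuming the phone supplies compass heading and GPS, with the panning scheme and the 1 km volume falloff as arbitrary choices of mine:

```python
import math

def home_cue(heading_deg, bearing_home_deg, dist_home_m):
    """Turn 'which way is home' into a quiet stereo tone:
    pan follows direction, volume fades with distance.
    Returns (pan in [-1, 1], volume in [0, 1])."""
    relative = math.radians((bearing_home_deg - heading_deg + 180) % 360 - 180)
    pan = math.sin(relative)  # -1 = hard left, +1 = hard right
    volume = 1 / (1 + dist_home_m / 1000)  # quieter as home recedes
    # Note: pure panning can't distinguish ahead from behind; a real
    # version might vary timbre or pitch to break that symmetry.
    return pan, volume

print(home_cue(heading_deg=90, bearing_home_deg=0, dist_home_m=2000))
# Facing east with home due north, 2 km away: (-1.0, 0.333...)
```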