The problem with all this 3d headwear, it seems to me, is that it doesn’t really offer any substantial improvement over a monitor and mouse. Our brains don’t need stereoscopic displays to perceive a 3d world. They are very good at building up a 3d representation of the world from just a 2d image (something that comes in very handy in the real world when one of your eyes is closed or non-functioning). And moving around a 3d world with your hand seems to be about the same level of difficulty as moving around with your neck, if not easier. And the main disadvantage of headwear is discomfort.
I’d give the Oculus Rift a 50% chance of success.
At least augmented reality headwear (such as Google Glass or the genuinely amazing Meta Spaceglasses: https://www.spaceglasses.com/ ) has something to offer that can’t be had from a traditional monitor. However, it remains to be seen how much people will actually desire those things. I can definitely imagine the Spaceglasses being widely used in creative professions.
EDIT: Changed ‘fatigue’ to ‘discomfort’.
Have you tried an Oculus Rift? I did, and I had the same “this is awesome!” reaction most people seem to report. Having more 3d space show up when you turn your head around is a big deal, as is having the 3d world take over your entire field of view.
There might be fatigue problems that show up with long-term use that we haven’t seen yet, and strange cultural reactions to the way the headset user becomes totally isolated from the surrounding real world, but the initial reaction where almost everyone who tries it on thinks it’s awesome predicts at least a fad success.
Initial reactions do not seem to be a good predictor of success. After the initial novelty wears off, users do in fact report problems such as low resolution and discomfort (dizziness, headaches, vertigo, and nausea). See for instance this review and many others that can be found with a simple Google search.
3d television users also initially reported “this is awesome!” reactions (for similar reasons), but it does not seem to have caught on (also for similar reasons: poor resolution and discomfort).
As mentioned in that review, it also depends on how the technology is used. Game developers have to take steps to reduce discomfort and to use the technology in novel ways. If that is done, then I agree that the chance of success becomes much larger.
Most of those problems are due to limitations in the devkit model (limited degrees of freedom in head tracking and low resolution), all of which are being fixed in the consumer model. Reviews of the consumer-model prototypes tested at conventions and press events report that these symptoms are gone.
When I made my prediction I called out a Snow Crash-style metaverse as the killer app. More generally, I think we will be seeing applications of head-mounted VR that are surprising, novel, and ultimately far more interesting than gaming. The Oculus Rift will, I think, be a transformative technology in general, even if it ends up controversial or marginalized in gaming.
In that case, the chances of success look much better.
Can you give some examples?
Besides the metaverse I’ve already mentioned, here’s another one:
Through my work I’ve been fortunate enough to use CAVE environments developed at UC Davis and UC San Diego in the analysis of planetary data. Search for “3d CAVE” on YouTube and you should find plenty of videos showing what this experience is like.
The effect of being able to immersively interact with this data is incredible. The classic example I gave visitors was some of the first published data to come out of the UC Davis computer science / geology visualization partnership: a buckling of subduction zones that was previously unknown despite sufficient data having been available for at least the last century. They loaded earthquake data overlaid on a globe basically as a test of the system, and almost immediately discovered the subduction buckling from straight visual inspection.
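For a flavor of what that kind of overlay involves (a minimal sketch of my own, not the actual UC Davis pipeline): each earthquake hypocenter in a catalog is just a latitude, longitude, and depth, which maps to a 3D point you can hand to any renderer.

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean radius, spherical-Earth assumption

def hypocenter_to_xyz(lat_deg: float, lon_deg: float, depth_km: float):
    """Convert an earthquake hypocenter to Cartesian coordinates (km),
    assuming a spherical Earth. Depth is measured below the surface,
    so deeper events sit closer to the globe's center."""
    r = EARTH_RADIUS_KM - depth_km
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = r * math.cos(lat) * math.cos(lon)
    y = r * math.cos(lat) * math.sin(lon)
    z = r * math.sin(lat)
    return (x, y, z)

# A deep event roughly under the Tonga subduction zone (illustrative values):
print(hypocenter_to_xyz(-20.0, -174.0, 600.0))
```

Plot a whole catalog of points like these and structures such as the dipping slab of a subduction zone fall straight out of the geometry, which is exactly what a viewer inside a CAVE inspects visually.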
Analyzing geometric data directly in an immersive 3D environment is far more productive than traditional techniques, because it exploits the natural machinery built into us for aggregating sensory data and extracting details from it. It already sees use in many areas: I once sat next to someone on a plane whose job it was to install these things on oil exploration ships, where the energy companies use them to quickly analyze the terabytes of data coming in from the sea bed.
I expect that in nearly all fields of engineering, physical science, and biology there are great efficiencies to be gained by utilizing the immersive CAVE experience. But a traditional CAVE will cost you half a million dollars, putting it way outside the reach of most organizations. An Oculus Rift + Kinect + decent graphics card, on the other hand, sets you back less than a thousand dollars.
(BTW, experience with immersive CAVE environments is that, with suitable precision and capability in the technology, motion-sickness-like symptoms disappear for all but a few percent of the population.)
I actually agree with you here. As I mentioned in my first reply, I can easily imagine virtual/augmented reality headsets being used for creative professions, and I can also easily imagine them being used for science/engineering and so on. It’s just hard for me to imagine them being widely used in gaming, at least in their current form. Maybe future, more advanced iterations of the technology would have better chances.
How about an architect walking his clients though their soon-to-be house?
What makes the Oculus Rift special in that regard? There have been numerous head-mounted VR solutions that have been able to do that for many years. Yet they have not seen any serious use for such purposes.
Have you tried it?
The Rift is different in that it provides a full-hemisphere viewing angle. There is no ‘tunnel vision’, and you get full peripheral vision. Peripheral vision is important to the human visual system (HVS) for motion sensation and situational awareness.
It’s immediately different as soon as you turn your head; there is a definite wow factor over a monitor.
The tradeoff, of course, is the terrible resolution, but it’s interesting in showing the potential of at least solving most of the other immersion problems.
Solved in the consumer version, which is still being worked on (at least 1080p per eye).
1080p per eye is hardly enough to ‘solve’ the resolution problem. There is a fundamental tradeoff between FOV and effective resolution, which is one reason other manufacturers haven’t attempted full human FOV. For a linear display it’s something like 8k x 4k per eye for a full-FOV HMD to have HDTV-equivalent resolution.
The big advantage over a monitor is immersion. When I tried out an Oculus Rift I felt like I was inside the virtual space in a way that I’ve never felt while playing FPSes on a monitor. That’s not a small thing.
Another advantage is that it increases how many input axes you have. Think of games where you’re flying a spaceship or driving a car and you can freely look in all directions while controlling your vehicle with both hands. That’s impossible on a standard monitor.
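The decoupling being described can be sketched like this (hypothetical names, not any real engine’s API): the camera direction is the vehicle’s orientation composed with the head-tracker’s orientation, so the hands and the neck drive independent axes.

```python
# Minimal sketch of decoupled look/steer axes (hypothetical, engine-agnostic).
# The hands steer the vehicle; the head tracker rotates the camera on top of it.

def camera_yaw(vehicle_yaw_deg: float, head_yaw_deg: float) -> float:
    """World-space camera yaw: vehicle heading composed with head look,
    wrapped into [0, 360)."""
    return (vehicle_yaw_deg + head_yaw_deg) % 360.0

# Vehicle driving due north (0 deg) while the pilot glances over their left
# shoulder: the camera faces west, the vehicle's heading is unaffected.
print(camera_yaw(0.0, -90.0))  # 270.0
```

On a monitor the same mouse that steers usually also aims the camera, so the two yaws above collapse into one axis; the head tracker is what frees them.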
It’s not impossible. Games frequently allow you to use the arrow keys to move around while using the mouse to change the view direction (or vice versa).
I know that; I’ve played FPSes with that control layout for thousands of hours. I said “while controlling your vehicle with both hands” which means, for example, with a steering wheel, a throttle+joystick, or a keyboard+mouse with the mouse controlling something besides camera angle.