You have officially blown my mind. I seriously cannot believe that AI can subtly mess with video color in real time based on its known effects on eye movement; that is absolutely nuts, and the applications are limitless in the short and long term. No wonder Apple stock keeps going up; there are probably all sorts of things like that which I’m not aware of.
This isn’t an AI thing at all, it’s an optics thing.
One of the core problems of VR optics is that the panels emit three different wavelengths of light (R, G and B), and these bend differently when they pass through the lenses. If you naively display an image without correcting this, you wind up with red, green, and blue partial images that have separated from each other. In order to fix this problem, you predict the lens effect, apply the opposite effect to the drawn image, and have the distortions cancel. The problem is that the headset’s position on your face is imprecise, and if you shift the headset a millimeter in any direction, the R, G and B images (as perceived by the eye) move in different directions. If you’re trying to display black-on-white or white-on-black text, moving the color channels a pixel apart has a major effect on readability.
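To make that concrete, here is a minimal sketch of the pre-correction step in Python/NumPy, assuming a one-coefficient radial distortion model per channel. Real headsets use per-lens calibrated polynomials; everything here, from the function name to the coefficients, is illustrative:

```python
import numpy as np

def precorrect_chromatic_aberration(image, k=(0.022, 0.0, -0.020)):
    """Warp each color channel with the inverse of the lens's
    wavelength-dependent distortion so the two effects cancel.
    `k` holds one made-up radial coefficient per channel; a real
    headset ships calibrated polynomials per lens."""
    h, w, _ = image.shape
    # Pixel coordinates normalized to [-1, 1] around the lens axis.
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    nx, ny = (xs - cx) / cx, (ys - cy) / cy
    r2 = nx * nx + ny * ny

    out = np.zeros_like(image)
    for c in range(3):  # R, G, and B each get their own warp
        # Simple r' = r * (1 + k * r^2) radial model: sample each
        # channel slightly further out (or in) so the lens bends
        # it back into alignment with the other two.
        scale = 1.0 + k[c] * r2
        sx = np.clip(nx * scale * cx + cx, 0, w - 1).astype(int)
        sy = np.clip(ny * scale * cy + cy, 0, h - 1).astype(int)
        out[..., c] = image[sy, sx, c]
    return out
```

Note that the warp is computed for one assumed eye position relative to the lens; shift the headset a millimeter and the precomputed per-channel offsets no longer match what the eye actually sees, which is exactly the fringing problem described above.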
Before I read this, I thought I was cool for knowing that an oscillating adaptive refresh rate could yield known, measurable effects, e.g. while shopping online. That’s nothing compared to what chromatic aberration can do. Thank you very much for sharing; my career has benefited profoundly from learning this.
This paragraph is profoundly confused in a way that I can’t fathom.
foveated rendering (uncertain value, but in the best case might effectively quadruple your GPU speed)
I’m definitely not an expert in this area, but I can’t imagine this being possible unless the headset was hardwired to a data center or something. Have we really gotten to the point where that much ML can fit on a gaming PC?
Again, nothing to do with ML. Foveated rendering is a fancy way of saying “don’t spend GPU cycles drawing parts of the screen that the user isn’t looking at”. It only works if you have an eye-tracking camera that tells you which part of the screen the user is looking at.
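A compositing sketch in Python/NumPy of the same idea: render the whole view cheaply at low resolution, render a small full-resolution patch where the eye tracker says the user is looking, and paste the patch on top. `render(w, h, region=...)` is a stand-in for an engine render pass, and all the resolutions and window sizes here are made-up:

```python
import numpy as np

def foveated_frame(render, gaze_xy, full=(1200, 1200), fovea=300, scale=4):
    """Build one eye's frame from two passes: a cheap 1/scale-res
    pass covering everything, plus a native-res pass covering only
    a small window around the tracked gaze point."""
    w, h = full
    gx, gy = gaze_xy

    # Pass 1: the whole view at 1/scale resolution, upscaled to fit.
    low = render(w // scale, h // scale, region=None)
    frame = np.repeat(np.repeat(low, scale, axis=0), scale, axis=1)

    # Pass 2: a native-resolution patch centered on the gaze point,
    # clamped so it stays inside the frame.
    x0 = max(0, min(w - fovea, gx - fovea // 2))
    y0 = max(0, min(h - fovea, gy - fovea // 2))
    patch = render(fovea, fovea, region=(x0, y0))
    frame[y0:y0 + fovea, x0:x0 + fovea] = patch
    return frame
```

At these illustrative numbers the two passes together shade about 1/8 of the native pixel count, which is where best-case claims like "effectively quadruple your GPU speed" come from; real engines blend the boundary and can't drop the peripheral resolution too far before users notice, so the practical win is smaller.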