eliminating those possibilities inconsistent with your observations
There’s the (/a) rub. When is a hypothesis inconsistent with observations? More generally, what probabilities does a hypothesis assign to observations? If we want our world models to really capture the universe, including a fine-grained self-understanding, they will not look like predicted sequences of observations, which are already high-level phenomena within an “observer”. They should be more reductionist, i.e. true to the actual structure of the universe. But then, how do you know when a (hypothetical) universe predicts that you see red vs. green? This “self-location” or “bridging hypothesis” is the whole problem.
Hypotheses don’t assign probabilities in this model; they only make absolute predictions.
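A minimal sketch of what that eliminative picture could look like, assuming hypotheses are deterministic predictors over an observation sequence (all names here are illustrative, not from any particular formalism):

```python
# Eliminative induction sketch: each hypothesis makes an absolute
# (deterministic) prediction for every time step, and we discard any
# hypothesis whose prediction disagrees with an actual observation.

def eliminate(hypotheses, observations):
    """Keep only hypotheses consistent with every observation so far.

    `hypotheses` maps a name to a predictor: a function from time step
    to the predicted observation at that step.
    """
    return {
        name: predict
        for name, predict in hypotheses.items()
        if all(predict(t) == obs for t, obs in enumerate(observations))
    }

# Toy example: three hypotheses about a bit sequence.
hypotheses = {
    "all_zeros": lambda t: 0,
    "all_ones": lambda t: 1,
    "alternating": lambda t: t % 2,
}

survivors = eliminate(hypotheses, [0, 1, 0])
print(sorted(survivors))  # only "alternating" survives
```

Note that this sketch sidesteps the bridging problem entirely: it assumes each hypothesis already outputs observations directly, rather than a low-level universe state that must somehow be mapped onto "you see red vs. green".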
I wouldn’t call it the “whole problem”, but yes, bridging is not handled by this model, and is currently an open problem AFAIK.