probabilities should correspond to expected observations and expected observations only
FWIW I think this is wrong. There’s a perfectly coherent framework, subjective expected utility theory (Jeffrey, Joyce, etc.), in which probabilities can correspond to many other things. Probabilities as credences can correspond to confidence in propositions unrelated to future observations, e.g., philosophical beliefs or practically unobservable facts. You can unambiguously assign probabilities to ‘cosmopsychism’ and ‘Everett’s many-worlds interpretation’ without expecting ever to observe their truth or falsity.
However, there is another source of uncertainty: observational uncertainty. The other person might be uncertain whether they have all the facts that feed into their model, or whether their observations are correct.
This is reasonable. If a deterministic model has three free parameters, two of which you have specified, you could just use your prior over the third parameter to create a distribution over model outcomes. This kind of situation should be pretty easy to clarify, though, by saying something like “my model predicts event E iff parameter A is above A*” and “my prior P(A>A*) is 50%, which implies P(E)=50%.”
But generically, the distribution is not coming from a model. It just looks like your all-things-considered credence that A>A*. I’d be hesitant to call a probability based on it your “inside view/model” probability.
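As a minimal Monte Carlo sketch of the deterministic-model case above (the normal prior, the threshold `A_star`, and all names here are illustrative assumptions, not anything from the original discussion):

```python
import numpy as np

# Toy sketch: a deterministic model with three parameters, two fixed,
# and a prior over the third (A). The model predicts event E iff A > A*,
# so P(E) is just the prior mass above A*.
rng = np.random.default_rng(seed=0)

A_star = 1.0  # hypothetical threshold
prior_samples = rng.normal(loc=1.0, scale=0.5, size=100_000)  # assumed prior over A

p_E = (prior_samples > A_star).mean()  # Monte Carlo estimate of P(A > A*)
print(f"P(E) = P(A > A*) ~= {p_E:.2f}")  # ~0.50, since the prior median sits at A*
```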
These are great. Though Sleeping Mary can tell whether she’s colourblind on any account of consciousness. Whether or not she learns a phenomenal fact when going from ‘colourblind scientist’ to ‘scientist who sees colour’, she does learn the propositional fact that she isn’t colourblind.
So, if she sees no colour, she ought to believe that the outcome of the coin toss is Tails. If she does see colour, both SSA and SIA say P(Heads)=1/2.
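To spell out the first step (assuming, as the setup implies, that a no-colour awakening can only occur if the coin landed Tails):

$$P(\text{Tails} \mid \text{no colour}) = \frac{P(\text{no colour} \mid \text{Tails})\,P(\text{Tails})}{P(\text{no colour})} = 1,$$

since $P(\text{no colour} \mid \text{Heads}) = 0$ makes $P(\text{no colour}) = P(\text{no colour} \mid \text{Tails})\,P(\text{Tails})$. Seeing colour, by contrast, is compatible with both outcomes, which is where SSA and SIA both return the prior of 1/2.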