Assumption of positive rationality

Let’s pretend, for the sake of simplicity, that all belief-holding entities are either rational or irrational. Rational entities have beliefs that correlate well with reality and update those beliefs properly on evidence. Irrational entities have beliefs that do not correlate with reality at all and update them at random. Now suppose Bob wants to know the probability that he is rational. He estimates that someone whose thought process seems, from the inside, the way his does is 70% likely to be rational and 30% likely to be irrational. Unfortunately, this does not help much. If Bob is irrational, then his estimate is useless. If Bob is rational, then, after updating on the fact that a randomly selected Bob-like entity turned out to be rational, we can estimate that the probability of another randomly selected Bob-like entity being rational is higher than 70% (the exact value depends on how uncertain we are about what fraction of Bob-like entities are rational; a small numerical sketch follows below). But Bob doesn’t care whether a randomly selected Bob-like entity is rational; he wants to know whether he is rational. Conditional on Bob’s attempts to figure that out being effective, i.e. conditional on Bob being rational, the probability is 1 by definition. Conditional on Bob being irrational, he cannot give a meaningful estimate of the probability of much of anything. Thus, even if we ignore the difficulty of coming up with a prior, when Bob tries to evaluate evidence about whether or not he is rational, he ends up with:
P(evidence given Bob is rational) = x (he can figure it out)
P(evidence given Bob is irrational) = ?
I am not aware of any good ways to do Bayesian reasoning with question marks. It seems that Bob cannot meaningfully estimate the probability that he is rational. However, in a decision-theoretic sense this is not really a problem for him: Bob cannot be an effective decision agent if his beliefs about how to achieve his objectives are uncorrelated with reality, so he has no expected utility invested in the possibility that he is irrational. All he needs are probabilities conditional on his being rational, and those he has.
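To make the “higher than 70%” figure above concrete, here is a minimal sketch, assuming (purely for illustration) that Bob’s 70% estimate is the mean of a Beta prior over the fraction of Bob-like entities that are rational; conditioning on one randomly selected Bob-like entity being rational raises that mean, and by more when the prior is more uncertain:

# Illustrative only: the Beta parameters below are assumptions, not anything
# stated in the argument above.

def posterior_mean_after_one_rational(a, b):
    # Beta(a, b) prior on the rational fraction; observing one rational
    # Bob-like entity gives a Beta(a + 1, b) posterior with mean (a + 1) / (a + b + 1).
    return (a + 1) / (a + b + 1)

# Same 70% prior mean, different amounts of uncertainty about the fraction.
for a, b in [(70, 30), (7, 3), (0.7, 0.3)]:
    print(f"Beta({a}, {b}): prior mean {a / (a + b):.2f} -> "
          f"posterior mean {posterior_mean_after_one_rational(a, b):.3f}")

Of course, this little calculation is only trustworthy on the assumption that whoever runs it is rational, which is exactly the point.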

This does not seem to extend well to further increases in rationality. If you act on the assumption that you are immune to some common cognitive bias, you will just fail at life. However, I can think of one real-life application of this principle: the possibility that you are a Boltzmann brain. A Boltzmann brain would have no particular reason to have correct beliefs or good algorithms for evaluating evidence. When people talk about the probability that they are a Boltzmann brain, they often point out that our sensory input is far better organized than it should be for almost all Boltzmann brains, but if you are a Boltzmann brain, how are you supposed to know how well-organized your visual field should be? Is there any meaningful way someone can talk about the probability of em being a Boltzmann brain, or does ey just have to express all other probabilities as conditional on em not being a Boltzmann brain?
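Put in the same form as the Bob example (the particular evidence term here is only an illustration, not anything more precise than the original argument), the update someone would want to perform is:

P(orderly sensory input given I am not a Boltzmann brain) = x (estimable from physics and how ordinary brains work)
P(orderly sensory input given I am a Boltzmann brain) = ?

and the question mark is the same one as before, which is what makes an unconditional probability so hard to state.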