How does “not believe” translate into a probability assignment?
I don’t see that it has to. In particular, the theorems that say (roughly) “the right way to think about credence is in terms of probabilities with Bayesian updating” all assume that all your credences are represented by single real numbers; if there’s something necessarily irrational about simply declining to assign a probability to something then I don’t know what.
For instance: consider a statement that you simply don’t understand, and that for all you know might be either nonsense, or sophisticated truth, or sophisticated falsehood. Until you know at least something about what (if anything) it means, whether you assign a probability to it doesn’t make much difference: you can’t act on that probability assignment even once you’ve got it. (There are some possible exceptions; thinking of some is left as an exercise for the reader. I don’t think they make much difference to the overall point.)
For instance: consider a situation in which you (knowingly) lack much information relevant to deciding whether something is true, but you could get that information readily if you needed to. In that case, the right thing to do in most cases where the truth of the proposition matters is to get more information; a mental note saying “I haven’t assigned a probability to this yet” is not a bad way to handle that situation. (In order to be able to assign a probability after further research, perhaps there’d better be such a thing as “the probability you would have assigned if you’d thought about it”. But you needn’t have thought about it yet, and you needn’t have any probability assigned; you can still say “I haven’t reached an opinion about this yet”.)
There’s a lot to be said for having, at least in principle, probability assignments for everything. It simplifies one’s decision theory, for instance. But I don’t see any compulsion.
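If it helps to make the distinction concrete: one can represent “no probability assigned yet” as a state of its own, separate from every numeric credence, and have one’s decision procedure treat it as a prompt to gather information rather than to act. A minimal sketch in Python (the proposition strings and the `next_step` helper are purely illustrative, not anyone’s actual decision theory):

```python
from typing import Optional

# Credences as Optional[float]: None means "no probability assigned yet",
# which is deliberately distinct from any numeric value such as 0.5.
credences: dict[str, Optional[float]] = {
    "it will rain tomorrow": 0.3,
    "statement I don't yet understand": None,  # declined to assign
}

def next_step(proposition: str) -> str:
    """Decide what to do about a proposition given the current credence."""
    p = credences.get(proposition)
    if p is None:
        # No assignment yet: often the right move is further research,
        # not acting on a made-up number.
        return "gather more information"
    return "act on credence %.2f" % p

print(next_step("it will rain tomorrow"))             # act on credence 0.30
print(next_step("statement I don't yet understand"))  # gather more information
```

The point of the sketch is only that “unassigned” is representable and actionable as a category, without being squeezed into a real number.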