Specifically, if you allow "God" to mean an agent that created the visible universe rather than a process, then I have no evidence for or against either hypothesis.
If you are given a hypothesis "X exists" and you have no evidence for that hypothesis, the rational conclusion is to not believe X exists (which is very different from believing "X does not exist"). The fact that you have no evidence against it is not particularly relevant; there is an arbitrarily large (if not infinite) number of existential propositions for which you have no evidence against them.
More succinctly, if you have no evidence for or against a particular existential proposition, you are (or should be) an “atheist” with respect to that proposition.
If I’ve made a mistake in my reasoning/epistemology, please correct me. I’d like to make an actual independent post on the issue of not believing versus believing not, but I’m pretty sure I’m a karma point short.
If you are given a hypothesis "X exists" and you have no evidence for that hypothesis, the rational conclusion is to not believe X exists (which is very different from believing "X does not exist").
How does “not believe” translate into a probability assignment?
Also, the prior is sometimes in favor of existence. There is, at least, a legitimate sense of “evidence” under which I have none for the existence of a person with the initials PQR, but I’m still extremely confident there is such a person.
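To make that concrete (a toy sketch of my own, not the parent's argument: it assumes each of the 26^3 three-letter initial combinations is equally likely and independent across people, which is clearly false for real names, though the conclusion survives any realistic model):

```python
import math

# Assumed model: each person's three initials are an independent,
# uniform draw from the 26^3 possible combinations.
p_single = (1 / 26) ** 3          # chance one given person has initials PQR
n_people = 8_000_000_000          # rough world population

# P(at least one match) = 1 - (1 - p)^N, computed in log space
# so the tiny (1 - p)^N term doesn't lose precision.
p_at_least_one = 1 - math.exp(n_people * math.log1p(-p_single))
print(p_at_least_one)             # indistinguishable from 1.0
```

So even with zero "evidence" in the observational sense, the structural prior alone justifies near-certainty that such a person exists.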
Also also, precise existential statements must be over domains. The probability I assign to any particular possible finite structure existing in the universe must be at least the probability I assign to the universe being infinite, which is pretty high. Though, of course, I don’t have much reason to care whether Zeus exists 3^^^3 light-years away.
I’d like to make an actual independent post on the issue of not believing versus believing not
Please do!
How does “not believe” translate into a probability assignment?
I don’t see that it has to. In particular, the theorems that say (roughly) “the right way to think about credence is in terms of probabilities with Bayesian updating” all assume that all your credences are represented by single real numbers; if there’s something necessarily irrational about simply declining to assign a probability to something, I don’t know what it is.
For instance: consider a statement that you simply don’t understand, and that for all you know might be either nonsense, or sophisticated truth, or sophisticated falsehood. Until you know at least something about what (if anything) it means, whether you assign a probability to it doesn’t make much difference: you can’t act on that probability assignment even once you’ve got it. (There are some possible exceptions; thinking of some is left as an exercise for the reader. I don’t think they make much difference to the overall point.)
For instance: consider a situation in which you (knowingly) lack much information relevant to deciding whether something is true, but you could get that information readily if you needed to. In that case, the right thing to do in most cases where the truth of the proposition matters is to get more information; a mental note saying “I haven’t assigned a probability to this yet” is not a bad way to handle that situation. (In order to be able to assign a probability after further research, perhaps there’d better be such a thing as “the probability you would have assigned if you’d thought about it”. But you needn’t have thought about it yet, you needn’t have any probability assigned, and you can still say “I haven’t reached an opinion about this yet”.)
There’s a lot to be said for having, at least in principle, probability assignments for everything. It simplifies one’s decision theory, for instance. But I don’t see any compulsion.
In my experience with atheist communities, the difference between “do not believe X exists” and “believe X does not exist” seems to be roughly equivalent to P(“X exists”) = epsilon vs. P(“X exists”) = 0. I can’t speak for what Psychohistorian meant, though.
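That gap matters formally: in the odds form of Bayes' rule, a prior of exactly 0 can never be moved by any finite amount of evidence, while an epsilon prior can. A minimal sketch (my own illustration, not anything claimed in the thread):

```python
def update(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# A prior of exactly zero is immune to any evidence:
update(0.0, 1e6)    # -> 0.0
# An epsilon prior updates normally:
update(1e-9, 1e6)   # -> roughly 1e-3
```

So "P = 0" is a much stronger commitment than "P = epsilon": it amounts to declaring that no possible observation could change your mind.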