After some thought on why your argument sounded unsatisfactory to me, I decided that I have a much more abstract, much less precise argument, to do with things like the beginning of epistemology.
In the logical beginning, I know nothing about the territory. I notice that I have ‘experiences,’ but I have no reason for believing that these experiences are ‘real’ in any useful sense. So I decide to base my idea of truth on usefulness in helping me predict further experiences. ‘The sun rises every morning,’ in this view, is actually ‘it will seem to me that every time there’s this morning-thing, I’ll see the sun rise.’ All hypotheses (like maya and Boltzmann brains) that say these experiences are not ‘real,’ as long as I have no reason to doubt ‘reality,’ form part of an inscrutable probability noise in my probability assignments. Therefore, even if I was randomly instantiated into existence a second ago, it’s still rational to carry on as usual and say, “I have no issue with being a Boltzmann brain; it’s just part of my probability noise.”
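The “probability noise” point can be made concrete with a toy Bayesian sketch (my illustration, not part of the original comment; the hypothesis names and the 0.99 likelihood are made up). If a skeptical hypothesis assigns the same likelihood to every experience as the ordinary one, then however I split prior mass between them, my prediction for the next experience is unchanged:

```python
def predictive(prior_real, p_sunrise=0.99):
    """Mixture prediction for 'I will seem to see the sun rise'.

    Both hypotheses -- 'real world' and 'Boltzmann brain that happens
    to generate the same experience stream' -- assign the same
    likelihood p_sunrise to the next experience, so the prior split
    between them cancels out of the prediction.
    """
    prior_boltzmann = 1.0 - prior_real
    return prior_real * p_sunrise + prior_boltzmann * p_sunrise

# Shifting nearly all prior mass onto the skeptical hypothesis
# leaves the predictive probability exactly where it was.
assert predictive(0.999) == predictive(0.001)
```

In this sense empirically equivalent skeptical hypotheses are pure noise: they can soak up arbitrary prior probability without ever changing what I expect to experience next.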
I haven’t fleshed out precisely the connection between this reasoning and not worrying about Carroll’s argument—it seems as if I’m viewing myself as an implementation-independent process trying to reason about its implementation, and asking what reasoning holds up in that view.
I think this is a good argument. Thanks.