To be clear, I am mainly talking about doxastic states; it's just that much of the past discussion, and accordingly the intuitions and terminology, is based on "consciousness".
Step 4 assumes that there are real f/g/h, which need not be known. I get that this might not be valid if there is fundamental indeterminacy. However, even in that case, the indeterminacy might decompose into a disjunction over some equivalence class of f/g/h triples?
For particular f/g, it seems that for natural supervenience to fail would require extra-physical information, "antennae theory" or something. In the Chalmers sense, I mean f/g to determine psycho-physical bridging laws which are sufficient for natural supervenience, so there is no extra "soul goo". That way, the possible indeterminacy of the computational interpretation is fixed by deciding f/g.
I think g/h are basically part of an agent's anthropic priors. The agent builds a model of reality and of its own state of mind, and has a distribution over ways to bridge the two. I don't know what it would even mean, in principle, for there to be canonical such functions.
g/h can be posited by an agent, e.g. a Solomonoff inductor.
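As a rough sketch of what I mean (my gloss, not a claim about any canonical formulation): a Solomonoff inductor weights each hypothesis by the length of the shortest program reproducing its observations, so a candidate g/h bridge is just part of the program being scored:

$$M(x) \;=\; \sum_{p \,:\, U(p)\,=\,x*} 2^{-|p|}$$

where $U$ is a universal machine, $x$ is the agent's observation sequence, $x*$ means "outputs something beginning with $x$", and each program $p$ would have to encode both a physics and a g/h-style bridge from physical states to observations. Short bridges get more prior weight; there is no canonical bridge, just a prior over them.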
But also, if you're talking about agents as meaningful things in the first place, then something like "doxastic mental states" is already reified, in which case you can ask things like "do these supervene on the same reality physics does?"… It doesn't really work to explain doxastic states in terms of other doxastic states in an infinite regress.
Sure.
I reject that there is any such "base ground" from which to define things. An agent has to start with itself as it understands itself. My own talk of agents is ultimately grounded in my own subjective experience and sense of meaning. Even if there were some completely objective ground, I would still have to start from this place in order to evaluate and accept it.
In practice it all ends up pretty normal. Everyone agrees on what is real for basically the same reason that bounded agents have to agree on the temperature, even though temperature is technically subjective. The k-complexity priors are very constraining.
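To spell out the "constraining" part with the standard fact I have in mind (the invariance theorem, stated loosely): Kolmogorov complexity depends on the choice of universal machine only up to an additive constant, so two agents' k-complexity priors can disagree by at most a fixed multiplicative factor, independent of the hypothesis $H$:

$$K_U(H) \;\le\; K_V(H) + c_{V,U} \quad\Longrightarrow\quad 2^{-K_U(H)} \;\ge\; 2^{-c_{V,U}}\,2^{-K_V(H)}$$

So whatever machine an agent starts from, enough shared evidence washes the difference out, which is the sense in which everyone ends up agreeing on what is real.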
Well, that seems like a good starting point. Some of the arguments could then be subjectivized at this level: among agents who believe they exist in reality, what possible hypotheses could they have about their mental states, about reality, and about how the two relate? Is there something like a "disjunction over plausible alternatives" (which would include something like f/g), natural supervenience, etc.? Then, with k-complexity epistemology, it's possible to ask what sort of reality theory that will tend to produce, e.g. what a k-complexity epistemology would conclude about homomorphic encryption, in the case of other agents or of itself. One thing I am suggesting is that computation-bounded k-complexity-type reasoning (speed prior etc.) will tend to believe that reality contains more information than micro-scale physics, since such information would otherwise be intractable (it would be penalized by the speed prior). Put another way, physicalist reductionism "works" for computation-unbounded agents (given supervenience, information about microstates exhausts information about macrostates), but not for computation-bounded agents (the derivation of macrostates from microstates is sometimes computationally intractable; this is extra relevant when such macrostates are observable, e.g. in the case of the homomorphically encrypted agent observing its own beliefs).
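To make the tractability penalty concrete, here is one way to formalize it (using Levin's Kt, which is in the same family as the speed prior; this is my choice of formalism, not something the argument fixes): charge each hypothesis for runtime as well as description length:

$$Kt(x) \;=\; \min_{p}\,\bigl\{\, |p| + \log_2 t(p) \;:\; U(p) = x \,\bigr\}$$

Under a prior like $2^{-Kt(x)}$, a micro-physical description of a macrostate can be short in bits yet astronomically slow to run forward; the homomorphically encrypted agent is the extreme case, where deriving the agent's beliefs from the physical microstates amounts to breaking the encryption. A bounded agent therefore assigns real weight to hypotheses that posit the macrostate information directly rather than deriving it from microstates, which is the sense in which its reality contains more information than micro-scale physics.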