I reject that there is any such “base ground” from which to define things. An agent has to start with itself as it understands itself. My own talk of agents is ultimately grounded in my own subjective experience and sense of meaning. Even if there were some completely objective ground, I would still have to start from this place in order to evaluate and accept it.
In practice it all ends up pretty normal. Everyone agrees on what is real for basically the same reason that bounded agents have to agree on the temperature, even though temperature is technically subjective: the k-complexity priors are very constraining.
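The “very constraining” point can be made concrete with a small hand-rolled toy (everything in it, including the coin-bias hypothesis class and the invented description lengths, is my own illustration, not something from this exchange): two Bayesian agents with quite different simplicity priors over the same hypotheses converge on the same conclusion once shared evidence swamps the bounded gap between their priors.

```python
# Toy illustration (my construction, not from the chat): two agents
# assign different description lengths to the same coin-bias hypotheses
# (say, because they picked different universal machines; by the
# invariance theorem the lengths differ by at most a constant).
# Shared data swamps that bounded gap, so they converge on one answer.

import math
import random

hypotheses = [i / 10 for i in range(1, 10)]  # coin bias 0.1 .. 0.9

# Agent A thinks low biases are simpler; agent B thinks the opposite.
lengths_a = {h: 2 + i for i, h in enumerate(hypotheses)}
lengths_b = {h: 2 + (len(hypotheses) - 1 - i) for i, h in enumerate(hypotheses)}

def map_hypothesis(lengths: dict, data: list) -> float:
    # log posterior(h) is proportional to
    # -length(h) * ln(2) + log likelihood(data | h)
    def score(h):
        ll = sum(math.log(h) if heads else math.log(1 - h) for heads in data)
        return -lengths[h] * math.log(2) + ll
    return max(hypotheses, key=score)

random.seed(0)
data = [random.random() < 0.7 for _ in range(1000)]  # true bias 0.7

# Despite opposite priors, both agents should land on the same bias.
print(map_hypothesis(lengths_a, data), map_hypothesis(lengths_b, data))
```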
Well, that seems like a good starting point. Some of the arguments could then be subjectivized at this level: among agents who believe they exist in reality, what hypotheses could they hold about their mental states, reality, and how the two relate? Is there something like a “disjunction over plausible alternatives” (which would include something like f/g), natural supervenience, etc.? With a k-complexity epistemology it’s then possible to ask what sort of reality theory that reasoning will tend to produce, e.g. what would a k-complexity epistemology conclude about homomorphic encryption, whether in the case of other agents or of itself? One thing I am suggesting is that computation-bounded k-complexity-type reasoning (speed prior, etc.) will tend to believe reality contains more information than micro-scale physics, since such information would otherwise be intractable to derive (and so would be penalized by the speed prior). Put another way, physicalist reductionism “works” for computation-unbounded agents (given supervenience, information about microstates exhausts information about macrostates), but not for computation-bounded agents: the derivation of macrostates from microstates is sometimes computationally intractable, which matters especially when such macrostates are observable, as in the case of the homomorphically encrypted agent observing its own beliefs.
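To make the speed-prior point concrete, here is a stylized sketch (entirely my own construction; the bit-lengths and runtimes are made-up numbers, and the weighting is a Levin-style Kt prior, 2^-(length + log2 time), standing in for the speed prior): a short microphysics program that takes astronomically long to derive the observed macrostate loses, under the time-penalized prior, to a longer program that models the macrostate directly.

```python
# Toy sketch, not from the discussion above: compare two hypotheses for
# predicting an observed macrostate (e.g. an agent's own beliefs) under
# (a) a pure length prior and (b) a Levin/Kt-style time-penalized prior,
# log2 weight = -(description length + log2 runtime), standing in for
# the speed prior. All numbers below are invented for illustration.

def log2_length_prior(desc_len_bits: float) -> float:
    # Plain Kolmogorov-style weighting: log2 weight = -length.
    return -desc_len_bits

def log2_speed_prior(desc_len_bits: float, log2_runtime: float) -> float:
    # Also charge log2 of the computation time needed to produce
    # the prediction from the hypothesis.
    return -(desc_len_bits + log2_runtime)

# Hypothesis A: "simulate the microphysics, then derive the macrostate."
# Short program, but the derivation is brutal -- think of recovering a
# homomorphically encrypted agent's beliefs from its physical microstate.
micro_len, micro_log2_time = 1_000, 10_000  # runtime ~ 2**10_000 steps

# Hypothesis B: "model the macrostate directly" (track the beliefs as
# primitive structure in reality). Longer program, cheap to run.
macro_len, macro_log2_time = 5_000, 40      # runtime ~ 2**40 steps

def favored(score_micro: float, score_macro: float) -> str:
    return "microphysics" if score_micro > score_macro else "macro-level"

print("length prior favors:",
      favored(log2_length_prior(micro_len), log2_length_prior(macro_len)))
print("speed prior favors:",
      favored(log2_speed_prior(micro_len, micro_log2_time),
              log2_speed_prior(macro_len, macro_log2_time)))
# length prior favors: microphysics
# speed prior favors: macro-level
```

Under the pure length prior the microphysics hypothesis wins on brevity alone; once runtime is charged, the macro-level hypothesis dominates, which is the sense in which a bounded agent’s best theory of reality contains more than the microstates.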
Sure.