General comment (which has shown up many times in the comments on this issue): taboo “mind”, and this conversation seems clearer. It’s obvious that not all physical processes are altered by logical arguments, and any ‘mind’ is going to be implemented as a physical process in a reductionist universe.
Specific comment: This old comment by PhilGoetz seems relevant, and seems similar to contemporary comments by TheAncientGeek. If you view ‘minds’ as a subset of ‘optimization processes’, in that they try to squeeze the future into a particular region, then there are minds that are objectively better or worse at squeezing the future into the regions they want. In particular, some optimization processes persist for shorter or longer than others, and if we exclude short-lived or ineffective processes from consideration, then the remaining ones are likely to buy conclusions we consider ‘objective’, and it can be interesting to see which axioms or thought processes lead to which sorts of conclusions.
But it’s not clear to me that they buy anything like the processes we use to decide which conclusions are ‘objectively correct’.
Why should we view minds as a subset of optimization processes, rather than optimization processes as a set containing “intelligence”, which is a particular feature of real minds? We tend to agree, for instance, that evolution is an optimization process, but the claim “evolution has a mind” would rightfully be thrown out as nonsense.
EDIT: More like, real minds as we experience them, human and animal, definitely seem to have a remarkable number of things in them that don’t correspond to any kind of world-optimization at all. I think there’s a great confusion between “mind” and “intelligence” here.
Why should we view minds as a subset of optimization processes, rather than optimization processes as a set containing “intelligence”, which is a particular feature of real minds?
Basically, I’m making the claim that it could be reasonable to see “optimization” as a precondition for considering something a ‘mind’ rather than a ‘not-mind’, but not the only one, or it wouldn’t be a proper subset. And here, really, what I mean is something like a closed control loop: it has inputs, it processes them, it has outputs dependent on the processed inputs, and when in a real environment it compresses the volume of potential future outcomes into a smaller, hopefully systematically different, volume.
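To make that concrete, here’s a toy sketch of my own (purely illustrative, with made-up names): a thermostat as about the smallest closed control loop I can think of. It takes an input, processes it, produces an output that depends on that input, and, run against a noisy environment, it squeezes the spread of possible future temperatures into a narrow band around the setpoint.

```python
import random

def thermostat_step(temperature, setpoint=20.0):
    """Return a heating/cooling action given the sensed temperature (the 'processing')."""
    if temperature < setpoint - 0.5:
        return +1.0   # heat
    if temperature > setpoint + 0.5:
        return -1.0   # cool
    return 0.0        # do nothing

def simulate(steps=100, controlled=True):
    """Run the environment with the loop closed (controlled) or open."""
    temperature = 10.0
    for _ in range(steps):
        drift = random.gauss(0.0, 0.5)                               # environment perturbs the state
        action = thermostat_step(temperature) if controlled else 0.0  # output depends on processed input
        temperature += drift + action
    return temperature

# With the loop closed, final temperatures cluster near the setpoint;
# with it open, they wander freely: the "volume" of futures is compressed.
print(simulate(controlled=True), simulate(controlled=False))
```

Nothing in that loop needs to look anything like a mind, which is exactly why I’m treating optimization as necessary but not sufficient.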
We tend to agree, for instance, that evolution is an optimization process, but the claim “evolution has a mind” would rightfully be thrown out as nonsense.
Right, but “X is a subset of Y” in no way implies “any Y is an X.”
More like, real minds as we experience them, human and animal, definitely seem to have a remarkable number of things in them that don’t correspond to any kind of world-optimization at all.
I am not confident in my ability to declare which parts of the brain serve no optimization purpose. I should clarify that by ‘optimization’ here I do mean “make things somewhat better” for an arbitrary ‘better’ (this is the future-volume compression remarked on earlier), rather than “choose the absolute best option.”
I am not confident in my ability to declare which parts of the brain serve no optimization purpose. I should clarify that by ‘optimization’ here I do mean “make things somewhat better” for an arbitrary ‘better’ (this is the future-volume compression remarked on earlier), rather than “choose the absolute best option.”
I think that for an arbitrary better, rather than a subjective better, this statement becomes tautological. You simply find the futures created by the system we’re calling a “mind” and declare them High Utility Futures by virtue of the fact that the system brought them about.
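To spell out the worry with a toy sketch (my construction, not yours): if ‘better’ is allowed to be arbitrary, one can always write down a utility function after the fact that scores whatever the system actually did as the high-utility outcome, so “the system is an optimizer” becomes unfalsifiable.

```python
def post_hoc_utility(observed_outcome):
    """Build a 'utility function' that simply rewards whatever already happened."""
    return lambda outcome: 1.0 if outcome == observed_outcome else 0.0

outcome = "whatever the system happened to do"
u = post_hoc_utility(outcome)
assert u(outcome) == 1.0           # by construction, the system "maximized" u
assert u("anything else") == 0.0   # every alternative scores lower, trivially
```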
(And admittedly, humans have been using cui bono conspiracy-reasoning for thousands of years now without actually considering what other people really value.)
If we want to speak non-tautologically, then I maintain my objection that very little in psychology or subjective experience supports the belief that the mind, as such or as a whole, has an optimization function, rather than intelligence having an optimization function as a particularly high-level adaptation that steps in when my other available adaptations prove insufficient in a given context.
General comment (which has shown up many times in the comments on this issue): taboo “mind”, and this conversation seems clearer. It’s obvious that not all physical processes are altered by logical arguments, and any ‘mind’ is going to be implemented as a physical process in a reductionist universe.
Who said otherwise?
This old comment by PhilGoetz seems relevant
Thanks for that. I could add that self-improvement places further constraints.