While reading the OP and trying to match the ideas with my previous models and introspection, I was somewhat confused: on the one hand, the ideas seemed to usefully describe familiar processes in a gears-level way; on the other hand, I was unable to fit them with my previous models. (I finally settled on something along the lines of ‘this seems like an intriguing model of top/high-level coordination (=~conscious processes?) in the mind/brain, although it does not seem to address the structure that minds have?’)
> [...] the purpose of CSHW is not to replace the massive information processing solved by neural networks.
Your comment really helped me put this into perspective.
Are your previous models single or multi-agent? These ideas match multiagent models of the mind. If you start by assuming the mind to be a single agent, then CSHW will not fit in with your previous models of the mind’s structure.
Now reading the post for the second time, I again find it fascinating – and I think I can pinpoint my confusion more clearly now:
One aspect that sparks confusion when matched against my (mostly introspection- and lesswrong-reading-generated) model is the directedness of annealing. On the one hand, I do not see how the mechanism of free energy creates such a strong directedness as the OP describes with ‘aesthetics’; on the other hand, if in my mind I replace the term “high-energy state” with “currently active goal function(s)”, this becomes a shockingly strong model of my introspective experiences (matching large parts of what I would usually think of, roughly, as ‘System 1 thinking’). Also, the aspects of ‘dissonance’ and ‘consonance’ directly being unpleasant and pleasant feel more natural to me if I treat them as (possibly contradicting) goal functions that also synchronize the perception, memorizing, modelling and execution parts of the mind. A highly consonant goal function will allow for vibrant and detailed states of mind.
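To make this reframing concrete for myself, here is a deliberately minimal toy sketch (entirely my own construction, not anything from the OP; the goal names and numbers are arbitrary assumptions): simulated annealing where the ‘energy’ being minimized is the dissonance between two partly contradicting goal functions.

```python
# Toy sketch: annealing over "dissonance between active goal functions"
# instead of a physical energy. Purely illustrative assumptions throughout.
import math
import random

# Two partly contradicting goal functions: each scores how well a candidate
# mental configuration x (a single number here) serves it.
def goal_rest(x):
    return -(x - 0.2) ** 2   # prefers configurations near 0.2

def goal_seek(x):
    return -(x - 0.8) ** 2   # prefers configurations near 0.8

def dissonance(x, weights=(1.0, 1.0)):
    """'Energy' of a configuration: the weighted unmet demand of the active goals."""
    return -(weights[0] * goal_rest(x) + weights[1] * goal_seek(x))

def anneal(steps=5000, temp=1.0, cooling=0.999):
    x = random.random()
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.1)
        delta = dissonance(candidate) - dissonance(x)
        # Accept improvements always; accept worsenings with a probability
        # that shrinks as the system "cools" (the annealing part).
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        temp *= cooling
    return x, dissonance(x)

x_final, residual = anneal()
print(f"settled configuration: {x_final:.2f}, residual dissonance: {residual:.2f}")
```

With two incompatible goals the system settles on a compromise configuration but keeps some residual dissonance, which is roughly the introspective flavour I was trying to describe.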
Is there some mechanism that would allow for evolution to somewhat define the ‘landscape’ of harmonics? Is reframing the harmonics as goals compatible with the model? Something like this seems to be pointed at in the quote:
> Panksepp’s seven core drives (play, panic/grief, fear, rage, seeking, lust, care) might be a decent first-pass approximation for the attractors in this system.
---
Another aspect where my current model differs is that I do not identify consciousness (at least the part that creates the feeling of pleasure/suffering and the explicit feeling of ‘self’) as part of this goal-setting mechanism. In my model, the part of the mind that generates the feeling of pleasure or suffering is more of a local system (plus complications*) that takes the global state as model- and goal-input and tries to derive strategies from it. This part of the mind is what usually identifies as ‘self’, and it is this part that is most relevant for depression or schizophrenia. But since what I describe as ‘model- and goal-input’ really defines the world and goals that the ‘self’ sees and pursues at each moment (sudden changes can be very disconcerting experiences), the implications of annealing for health would stay similar.
---
After writing all of this, I can finally address the question from the parent comment:
> Are your previous models single or multi-agent?
I very much like the multiagent models sequence, although I am not sure how well my “Another aspect [...]” description above matches it: on the one hand, my model does have a privileged ‘self’ system that is much less fragmented than the goal-function landscape. On the other hand, the goal-function landscape seems best described by “shards of desire” (a formulation used in the sequences, if I remember correctly), and those shards can direct and override the self easily. This part fits well with the multiagent model.
---
*) A complication is that the ‘self’ can also endorse/reject goals and redirect ‘active goal-energy’ onto the goal-setting parts themselves in order to shape them (it feels like a kind of delegable voting power that the self, as strategy-expert, can use if it has gained the trust, and thus the voting power, of the goal-setting parts).
This will be a terribly late and very incomplete reply, but regarding your question:
> Is there some mechanism that would allow for evolution to somewhat define the ‘landscape’ of harmonics? Is reframing the harmonics as goals compatible with the model? Something like this seems to be pointed at in the quote:
>> Panksepp’s seven core drives (play, panic/grief, fear, rage, seeking, lust, care) might be a decent first-pass approximation for the attractors in this system.
A metaphor that I like to use here is that I see any given brain as a terribly complicated lock. Various stimuli can be thought of as keys. The right key will create harmony in the brain’s harmonics. E.g., if you’re hungry, a nice high-calorie food will create a blast of consonance which will ripple through many different brain systems, updating your tacit drive away from food seeking. If you aren’t hungry—it won’t create this blast of consonance. It’s the wrong key to unlock harmony in your brain.
Under this model, the shape of the connectome is the thing that evolution has built to define the landscape of harmonics and drive adaptive behavior. The success condition is harmony. I.e., the lock is very complex, the ‘key’ that fits a given lock can be either simple or complex, and the success condition (harmony in the brain) is relatively simple.
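For what it’s worth, here is a very crude toy sketch of the lock-and-key intuition (the drives, numbers, and the cosine-similarity ‘fit’ measure are all illustrative assumptions of mine, not a claim about real connectome harmonics): the same stimulus key only produces a burst of consonance when it matches the currently active drive state.

```python
# Toy illustration of the lock-and-key intuition; all drives, stimuli and
# numbers are made-up assumptions, not a model of real connectome harmonics.
import numpy as np

def consonance(drive_state, stimulus):
    """Crude 'fit' of a stimulus key to the current lock: cosine similarity."""
    return float(np.dot(drive_state, stimulus) /
                 (np.linalg.norm(drive_state) * np.linalg.norm(stimulus)))

# Current drive state as a vector over a few crude drives: [hunger, fear, seeking]
hungry = np.array([0.9, 0.1, 0.2])
sated  = np.array([0.05, 0.1, 0.6])

# Stimuli described by which drives they can discharge
rich_food   = np.array([1.0, 0.0, 0.1])
novel_place = np.array([0.1, 0.0, 1.0])

print("food while hungry:  ", round(consonance(hungry, rich_food), 2))   # high fit -> blast of consonance
print("food while sated:   ", round(consonance(sated, rich_food), 2))    # low fit -> wrong key
print("novelty while sated:", round(consonance(sated, novel_place), 2))  # high fit for the seeking drive
```

The ‘lock’ in a real brain is of course vastly more complex than a three-dimensional vector, but the success condition stays this simple: how much harmony the key produces.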
Thank you for this explanation.