It seems to me that a person could have quite a high level of integration between their goals, but at the same time could experience quite low meaning in their life.
Hmm, yeah I think you have convinced me the current frame is insufficient.
Some further musings… (epistemic status: who knows?)
Seems like there are at least a few things going on:
1. Alignment-of-purposes, and a sense of “I’m doing the thing I’m supposed to be doing.”
2. “The thing I’m doing here matters, somehow.”
3. “I feel vibrant / excited about the things I’m doing.”
Number 2 I am perhaps most confused about. Will come back to that in a sec.
Number 3 seems to decompose into “why would you build a robot that had vibrance/excitement, or emotions in general?” I don’t think I can give a technical answer here that I clearly understand, but I have a vague fuzzy model of “emotions are what feedback loops feel like from the inside, when the feedback loops are constructed some-particular-way.” I don’t know what-particular-way the feedback loops need to be constructed so as to generate the internal feeling of vibrance/excitement, but… I feel sort of okay about that level of mysteriousness. It feels like a blank spot in my map, but not a confusing blank spot in my map.
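(If I try to cash out that fuzzy model even a little, the best I can do is a toy loop like the sketch below. To be clear, this is purely my own illustration, every name in it is hypothetical, and it deliberately says nothing about why such a loop would feel like anything from the inside; that part is exactly the blank spot.)

```python
# Toy sketch, not a claim about the actual "particular way": an agent
# tracks its recent rate of progress, and a positive surprise on that
# rate plays the functional role of "excitement," feeding back into
# how much effort it allocates. All names here are made up.

def run_loop(progress_per_unit_effort, steps=10):
    effort = 1.0       # how hard the agent is currently trying
    expectation = 0.0  # slow-moving estimate of typical progress
    for t in range(steps):
        progress = progress_per_unit_effort(t) * effort
        surprise = progress - expectation   # doing better than expected?
        excitement = max(0.0, surprise)     # crude stand-in for "vibrance"
        effort = 1.0 + excitement           # feedback: excitement raises effort
        expectation = 0.9 * expectation + 0.1 * progress
        print(f"t={t}: progress={progress:.2f}, excitement={excitement:.2f}")

# A task that keeps paying off more over time produces a mounting
# excitement signal; a flat or declining one damps it back toward zero.
run_loop(lambda t: 1.0 + 0.1 * t)
```

The only point is the loop shape: a signal about progress feeding back into behavior, with the “emotion” living in the comparison against expectation.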
I suspect that if we built a robot on purpose, we’d ideally want to do it without the particular kind of feedback-loops/emotions that humans have. But if I’m dumb ol’ evolution, building robots however I can without the ability to think more than one generation ahead… I can imagine building some things with emotions, one of which is some kind of vibrance, excitement, enthusiasm, etc. And then, when that organism ends up having to build high-level strategic planning in confusing domains, the architecture for those emotions-and-corresponding-qualia ends up being one of the building blocks that the meaning-making process gets constructed out of.
...
Returning to #2:
So one thing that comes up in the OP is that humans don’t just have to fill in an ontology beneath a clear-cut goal. They also have multiple goals, and have to navigate between them. As they fill in the ontology that connects their various goals, they have to guess at how to construct the high-level goals that their subgoals nest under.
StarCraftBot has to check “does this matter, or not?” for various actions like “plan an attack” or “establish a new base.” But it has a clear ultimate goal that unambiguously matters in a particular way, about which it probably isn’t necessary to have complex emotions.
But for us, “what is the higher-level goal? Do we have a thing that matters, or not?” is something we’re more fundamentally confused about, and having a barometer for “have we figured out whether we’re doing things that matter?” is actually more useful.
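(A toy way to see the contrast, since this is the crux for me. The sketch below is my own construction rather than anything from the OP, and all the goals in it are made up: when the terminal goal is given, “does this matter?” is a tree lookup; when the terminal goal itself is a guess, the best you get back is a credence, which is roughly the thing the barometer would be tracking.)

```python
# Hypothetical sketch: "does this matter?" for a bot with a fixed
# terminal goal, vs. an agent that has to guess its terminal goals.

from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    children: list["Goal"] = field(default_factory=list)

def in_subtree(goal: Goal, root: Goal) -> bool:
    """True iff `goal` sits somewhere under `root` in the goal tree."""
    return goal is root or any(in_subtree(goal, c) for c in root.children)

# StarCraftBot: one unambiguous terminal goal, so "does this matter?"
# is a yes/no lookup.
attack = Goal("plan an attack")
expand = Goal("establish a new base")
win = Goal("win the game", children=[attack, expand])
print(in_subtree(attack, win))  # True: it clearly matters

# Human-ish agent: several *candidate* terminal goals with guessed
# weights, so "does this matter?" only comes back as a credence.
candidates = [(win, 0.2), (Goal("raise a family"), 0.5), (Goal("make art"), 0.3)]

def p_matters(goal: Goal) -> float:
    return sum(w for root, w in candidates if in_subtree(goal, root))

print(p_matters(attack))  # 0.2: matters only under one guessed terminal goal
```

The StarCraftBot never needs the second half; we live in it.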
Maybe. idk.