Another big gap is explaining how and when U directly updates on C’s information. For example, it requires conscious reasoning and language processing to understand that a man on a plane holding a device with a countdown timer and shouting political and religious slogans is a threat, but a person on that plane would experience fear, increased sympathetic activation, and other effects mediated by the unconscious mind.
That’s not a gap at all—in fact, you answered it elsewhere in your article, right here:
U seems to reason over neural inputs – it takes in things like sense perceptions
The key is understanding that those sense perceptions need not be present-tense/actual; they can be remembered or imagined. It’s pretty central to what I do. (Heck, most of the model you’re describing isn’t much different from things I’ve been writing about since around 2005.)
Anyway, a big part of the work I do with people is helping them learn to identify the remembered or imagined sensory predictions (which drive the feelings and behavior) and inject other ways of looking at things. More precisely, other ways of interpreting the sensory impressions, such that they lead to different predictions about what will happen.
Don’t get me wrong, there is a ton of pragmatic information you need to know in order to be able to do that effectively, so it’s really only that simple in principle.
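To make the "different interpretations lead to different predictions" idea concrete, here is a minimal sketch in Python; everything in it (the example impression, the candidate readings, the predict function) is hypothetical and illustrative, not taken from the comment above:

```python
# Hypothetical sketch (names illustrative only): one raw sensory
# impression, several candidate interpretations, each leading to a
# different prediction and hence a different felt response.

impression = "colleague walked past without saying hello"

interpretations = {
    "they're angry at me":   ("conflict is coming", "dread"),
    "they were preoccupied": ("nothing will happen", "neutral"),
    "they didn't see me":    ("nothing will happen", "neutral"),
}

def predict(reading: str) -> tuple[str, str]:
    """Return (predicted outcome, felt response) for one interpretation."""
    return interpretations[reading]

# The change work described above amounts to noticing which reading is
# currently active and deliberately trying out the alternatives:
for reading in interpretations:
    outcome, feeling = predict(reading)
    print(f"{reading!r} -> predicts {outcome!r}, feels {feeling!r}")
```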
Chief amongst the problems most people encounter is that C usually pays near-zero attention to what U is doing, and is exceptionally prone to making up propositions that “explain” U’s motives incorrectly.
(The other fairly big problem is that C is pretty bad at coming up with alternative perspectives or interpretations—we’re pretty good at doing that to other people’s ideas, but not our own.)
One thing I sort of disagree with in your article, though: you can't really view U as a reasoning "agent". Remember: it isn't conscious (not an agent), and it's not singular (not an agent).
Unlike C, U can hold mutually-contradictory concepts, i.e., "double-binds". It is also concerned only with the regulation of the expected future value of perceptually-derived measurements (which includes things like status, the state of one's relationships, health, and so on: really, anything a human can value).
U can and does engage in “unsupervised learning” to find ways to regulate these values, but it is limited to relatively blind variations, rather than devious plots. (But then, evolutionary searches can certainly lead to things that look like devious plots on occasion!)
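As a toy illustration of that pairing of ideas (regulating an expected perceptual value via blind variation rather than a plot), here is a hedged sketch; the reference level, the toy prediction function, and all constants are invented for illustration:

```python
import random

# Toy sketch of blind-variation regulation (hypothetical, illustrative).
# There is no plan here: the loop just perturbs its current strategy at
# random and keeps whatever narrows the gap between a predicted
# perceptual value and its reference level.

reference = 0.8  # desired level of some perceived value (e.g. felt safety)

def predicted_value(strategy: float) -> float:
    """Stand-in for a prediction of the perceptual value a strategy yields."""
    return max(0.0, 1.0 - abs(strategy - 0.6))  # arbitrary toy landscape

strategy = 0.0
best_err = abs(reference - predicted_value(strategy))
for _ in range(1000):
    candidate = strategy + random.gauss(0, 0.1)   # blind variation
    err = abs(reference - predicted_value(candidate))
    if err < best_err:                            # keep only improvements
        strategy, best_err = candidate, err

print(f"settled on strategy={strategy:.2f}, error={best_err:.3f}")
```

Search like this can stumble into behavior that looks cunning from the outside, which is all the "devious plots" appearance requires.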
However, it’s sheer anthropomorphizing to think of U as if it were an actual agent, as opposed to simply treating agency as a metaphor. To think of it as an agent tends to lead to the idea of conflict, and it also implies that if you attempt to change it, it will somehow fight back or push against being changed.
And indeed, it can sometimes appear that way, if you don't realize that the agglomeration of regulated values we are calling "U" is under no requirement to be self-consistent, except as a result of sensory perceptions (whether real, remembered, or imagined) occurring in close temporal proximity.
That is, you realized your issue with crossing the street was silly because you actually paid attention to it—the juxtaposition of sensory information causing U to update its model.
To put it another way, your “realizing it was silly” was how the algorithm of U updating on C feels from the inside.
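Read as an update rule, that might look something like the following sketch (the learning rate and values are hypothetical, invented here for illustration): the association between a cue and a predicted threat only moves toward the evidence while the cue and the disconfirming evidence are attended together, whether that evidence is real, remembered, or imagined:

```python
# Hypothetical sketch of "U updates only on temporally juxtaposed
# perceptions". The cue->threat association moves toward the evidence
# only while both are held in attention at once; otherwise it stays
# frozen, no matter how silly it is.

LEARNING_RATE = 0.5  # arbitrary illustrative constant

def update(association: float, evidence: float, co_attended: bool) -> float:
    """Move the association toward the evidence, but only under juxtaposition."""
    if not co_attended:
        return association                    # no juxtaposition, no update
    return association + LEARNING_RATE * (evidence - association)

fear = 0.9                                             # cue strongly predicts danger
fear = update(fear, evidence=0.0, co_attended=False)   # ignored: unchanged at 0.9
fear = update(fear, evidence=0.0, co_attended=True)    # attended: drops to 0.45
fear = update(fear, evidence=0.0, co_attended=True)    # drops again, to ~0.22
print(f"remaining association: {fear:.2f}")            # "huh, that was silly"
```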
Essentially (at the risk of a bit of oversimplification), my work consists of teaching people to find what things to feel silly about, and to identify ways of thinking about them so that they do, in fact, feel silly about them. (It is, in fact, a remarkably common response to making any sort of "U" change: the "Wait... you mean I don't have to do it that way?" response.)
I generally agree with PJ, but not in this case. I don't think that C exists: just U, plus sensory modalities that some parts of U can manipulate. And I think U contains several kinds of systems: some which are not actual agents; others which, while not conscious, ARE actual agents in the same sense that animals with a visual cortex are; others which are agents in the sense that animals lacking a visual cortex are; and still other simple agents which can manipulate the sensory modalities in a stereotyped fashion (or censor the data coming into them) but which don't seek goals (or maybe, not outward-directed goals).
How has this affected your understanding of your values?
I think I may be confused about my values partly because I'm not carving myself into pieces like this.
I see my symbolic centers as essentially a forum within which mutually beneficial (and sometimes timeless) trades are negotiated between agents, some of which run on the same brain and some of which run in parallel in multiple brains. It looks more appealing, from the outside, than what I used to do, since it believes that it should. From the inside, it's nice because I'm getting all these gains from trade.
Unlike C, U can hold mutually-contradictory concepts
C can hold mutually-contradictory concepts. C is very good at holding mutually-contradictory concepts. Perhaps even better than U (inasmuch as U is more likely to give things weights, while C often thinks in absolutes).
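One hypothetical way to render that weights-versus-absolutes contrast (purely illustrative, not the commenter's own formulation): U as graded strengths, C as flatly asserted propositions, where nothing prevents the set of absolutes from containing a claim and its opposite at full strength:

```python
# Illustrative only: the weights-vs-absolutes contrast drawn above.

# U-style: graded strengths; a "contradiction" is just two weights.
u_beliefs = {"strangers are dangerous": 0.7,
             "strangers are harmless": 0.4}

# C-style: flat assertions; the container happily holds a claim and
# its rival simultaneously, each treated as simply true.
c_beliefs = {"honesty is always right", "lying to spare feelings is fine"}

print(max(u_beliefs, key=u_beliefs.get))  # U acts on the stronger weight
print(c_beliefs)                          # C asserts both outright
```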
Hmm, as I understood from the post, both C & U give due consideration in their own ways. This is too tangled for me, though; could you elaborate on how C's absolutes are so different from U's stimulus->reaction?