About the example in section 6.1.3: Do you have an idea of how the Steering Subsystem can tell that Zoe is trying to get your attention with her speech? It seems to me like that requires both (a) identifying that the speech is trying to get someone’s attention, and (b) identifying that the speech is directed at you. (Well, I guess (b) implies (a) if you weren’t visibly paying attention to her beforehand.)
About (a): If the Steering Subsystem doesn’t know the meaning of words, then how can it tell that Zoe is trying to get someone’s attention? Is there some way to tell from the sound of the voice? Or is it enough to know that there were no voices before and Zoe has just started talking now, so she’s probably trying to get someone’s attention to talk to them? (But that doesn’t cover all cases when Zoe would try to get someone’s attention.)
About (b): If you were facing Zoe, then you could tell if she was talking to you. If she said your name, then maybe the Steering Subsystem might recognize your name (having used interpretability to get it from the Learning Subsystem?) and know she was talking to you? Are there any other ways the Steering Subsystem could tell if she was talking to you?
I’m not sure how many false positives vs. false negatives evolution will “accept” here, so I’m not sure how precise a check to expect.
Do you have an idea of how the Steering Subsystem can tell that Zoe is trying to get your attention with her speech?
Good questions! I think you’re thinking about that kinda the wrong way around.
You’re treating “the things that Zoe does when she wants to get my attention” as a cause, and “my brain reacts to that” as the effect.
But I would say that a better perspective is: everybody’s brain reacts to various cues (sound level, pitch, typical learned associations, etc.), and Zoe has learned through life experience how to get a person’s attention by tapping into those cues.
So for example: If Zoe says “hey” to me, and I don’t notice, then Zoe might repeat “hey” a bit louder, higher-pitched, and/or closer to my head, and maybe also wave her hand, and maybe also poke me.
The wrong question is: “how does my brain know that louder and higher-pitched and closer sounds, concurrent with waving-hand motions and pokes, ought to trigger an orienting reaction?”.
The right perspective is: we have these various evolved triggers for orienting reactions, whose details we can think of as arbitrary (it’s just whatever was effective for noticing predators and prey and so on), and Zoe has learned from life experience various ways to activate those triggers in other people.
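To make that “evolved triggers plus learned exploitation” picture concrete, here’s a deliberately toy Python sketch (my own illustration, not anything from the post; the cue names and thresholds are all made up). The point is just that the Steering Subsystem only needs a fixed bank of simple checks on incoming signals, and anything that happens to trip those checks, whether it’s a predator or Zoe waving her hand and saying “hey” louder, fires the same orienting reaction.

```python
from dataclasses import dataclass

@dataclass
class AuditoryCues:
    """Low-level features of the current sound, relative to the recent baseline."""
    loudness_jump: float            # dB above the running average
    pitch: float                    # Hz
    estimated_distance: float       # meters, e.g. from interaural cues
    learned_assessor_signal: float  # 0..1, from a learned "thought assessor" (e.g. "that's my name")

# Hypothetical hard-coded thresholds: stand-ins for whatever evolution settled on.
LOUDNESS_THRESHOLD = 10.0   # sudden loud sounds
PITCH_THRESHOLD = 400.0     # high-pitched sounds
DISTANCE_THRESHOLD = 1.0    # sounds very close to the head
ASSESSOR_THRESHOLD = 0.5    # learned "this matters" signal

def orienting_reaction(cues: AuditoryCues) -> bool:
    """Fire an orienting reaction if ANY trigger is tripped.

    The Steering Subsystem never asks "is someone trying to get my attention?";
    it just runs these dumb checks. Zoe, who has a model of people, picks actions
    (louder, higher-pitched, closer, plus waving and poking) that trip them.
    """
    return (
        cues.loudness_jump > LOUDNESS_THRESHOLD
        or cues.pitch > PITCH_THRESHOLD
        or cues.estimated_distance < DISTANCE_THRESHOLD
        or cues.learned_assessor_signal > ASSESSOR_THRESHOLD
    )

# Zoe's first quiet "hey" from across the room: no trigger trips.
print(orienting_reaction(AuditoryCues(3.0, 220.0, 2.5, 0.1)))   # False
# Her second "hey", louder, higher-pitched, right next to my head: triggers fire.
print(orienting_reaction(AuditoryCues(15.0, 450.0, 0.4, 0.1)))  # True
```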
If she said your name, then maybe the Steering Subsystem might recognize your name (having used interpretability to get it from the Learning Subsystem?) and know she was talking to you?
Yup: STEP 1 is that one of my “thought assessors” (probably somewhere in the amygdala) has learned from life experience that hearing my own name should trigger orienting to that sound; and STEP 2 is that Zoe, in turn, has learned from life experience that saying someone’s name is a good way to get their attention.
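Continuing the toy sketch above (again my own illustrative guess at the mechanics, not a claim from the post, and every name and number in it is invented): STEP 1 can be pictured as a tiny supervised learner. The assessor sees some learned auditory category (e.g. “that was my name”) and is trained toward a ground-truth “this was worth orienting to” signal that arrives a moment later, so after enough pairings it fires on the name alone, and that learned trigger is exactly what Zoe exploits in STEP 2.

```python
# Toy "thought assessor": learns which learned auditory categories predict a
# ground-truth "worth orienting to" signal. Categories, learning rate, and the
# training data are all invented for illustration.

LEARNING_RATE = 0.2

def update(weights: dict, category: str, ground_truth: float) -> None:
    """Nudge this category's weight toward the ground-truth orienting signal."""
    w = weights.get(category, 0.0)
    weights[category] = w + LEARNING_RATE * (ground_truth - w)

weights: dict[str, float] = {}

# Life experience: hearing my own name is reliably followed by something worth
# orienting to (someone addressing me); a random other word is not.
experience = [("my_name", 1.0), ("other_word", 0.0)] * 20
for category, ground_truth in experience:
    update(weights, category, ground_truth)

# The assessor now fires on my name alone (STEP 1), which is the trigger Zoe
# has learned to activate by saying my name (STEP 2).
print(round(weights["my_name"], 2), round(weights["other_word"], 2))  # ~0.99, 0.0
```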