About the example in section 6.1.3: Do you have an idea of how the Steering Subsystem can tell that Zoe is trying to get your attention with her speech? It seems to me like that requires both (a) identifying that the speech is trying to get someone’s attention, and (b) identifying that the speech is directed at you. (Well, I guess (b) implies (a) if you weren’t visibly paying attention to her beforehand.)
About (a): If the Steering Subsystem doesn’t know the meaning of words, then how can it tell that Zoe is trying to get someone’s attention? Is there some way to tell from the sound of the voice? Or is it enough to know that there were no voices before and Zoe has just started talking, so she’s probably trying to get someone’s attention in order to talk to them? (But that doesn’t cover all the cases where Zoe would try to get someone’s attention.)
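(To make concrete the kind of meaning-free check I have in mind for (a), here’s a toy sketch of my own, not anything from the post: an “onset after silence” detector that only looks at short-term energy, so it never needs to know *what* was said. The frame length, quiet duration, and energy threshold are all arbitrary choices on my part.)

```python
import numpy as np

FRAME_MS = 20            # analysis window length, in milliseconds (arbitrary)
QUIET_FRAMES = 50        # ~1 s of quiet required before an onset "counts" (arbitrary)
ENERGY_THRESHOLD = 0.01  # arbitrary energy cutoff separating voice from quiet

def speech_onset_after_silence(audio: np.ndarray, sample_rate: int = 16_000) -> bool:
    """Toy check: True if a loud frame follows a sufficiently long quiet stretch."""
    frame_len = int(sample_rate * FRAME_MS / 1000)
    usable = len(audio) // frame_len * frame_len
    frames = audio[:usable].reshape(-1, frame_len)
    energy = (frames ** 2).mean(axis=1)  # short-term energy per frame, no semantics

    quiet_run = 0
    for e in energy:
        if e < ENERGY_THRESHOLD:
            quiet_run += 1
        else:
            if quiet_run >= QUIET_FRAMES:
                return True  # a voice starting right after a long quiet run
            quiet_run = 0
    return False
```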
About (b): If you were facing Zoe, then you could tell whether she was talking to you. If she said your name, then the Steering Subsystem might recognize your name (having used interpretability to get it from the Learning Subsystem?) and know she was talking to you? Are there any other ways the Steering Subsystem could tell whether she was talking to you?
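(Similarly for the name idea in (b), here’s my own toy sketch of what “recognizing the sound of your name without understanding it” might amount to: just matching incoming audio against a stored acoustic template of the name. Raw-waveform correlation and the 0.7 threshold are deliberate oversimplifications; presumably a real check would operate on something more like a spectral pattern.)

```python
import numpy as np

def sounds_like_my_name(audio: np.ndarray, name_template: np.ndarray,
                        threshold: float = 0.7) -> bool:
    """Toy template matcher: True if some stretch of `audio` correlates
    strongly with a stored recording of the listener's own name."""
    # Normalize both signals so the correlation score is amplitude-invariant.
    a = (audio - audio.mean()) / (audio.std() + 1e-9)
    t = (name_template - name_template.mean()) / (name_template.std() + 1e-9)
    # Slide the name template over the audio; a high peak means an acoustic
    # match, with no notion of the word's meaning anywhere in the computation.
    corr = np.correlate(a, t, mode="valid") / len(t)
    return bool(corr.max() > threshold)
```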
I’m not sure what balance of false positives vs. false negatives evolution would “accept” here, so I’m not sure how precise a check to expect.
The “ideas” link doesn’t seem to work.