I agree with the simulation aspect, sort of. I don’t know if similarity to myself is necessary, though. For instance, throughout history people have been able to model and interact with traders from neighbouring or distant civilizations, even though they might think very differently.
I’d say predictability, or simulability, is what makes us comfortable with an ‘other’. To me the scary aspect of an AI is the possibility that its behaviour can change radically and unpredictably (it gets an override update, some Manchurian Candidate type trigger). Humans can also deceive, but we are usually leaky deceivers. True psychopaths trigger a sort of innate, possibly violent, loathing in the people who discover them. AIs, assuming they are easily modified (which is not a guarantee; ML neural nets are hard to modify in a … surgical manner), would basically be viewed as psychopaths, with the attendant constant stress.
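To make the ‘hard to modify surgically’ point concrete, here’s a toy sketch (random weights, made-up sizes, nothing to do with any real system): nudge a single weight in a small dense net to ‘fix’ one behaviour, and the outputs on every unrelated probe input shift too, because what the net ‘knows’ is smeared across all the weights rather than stored in one editable place.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny random 2-layer net: 4 inputs -> 8 hidden -> 3 outputs.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def forward(x):
    return np.tanh(x @ W1) @ W2

probes = rng.normal(size=(5, 4))   # five unrelated probe inputs
before = forward(probes)

W2[3, 0] += 0.5                    # the "surgical" edit: one weight
after = forward(probes)

# Every probe's output moved, not just the one behaviour we meant to change.
print(np.abs(after - before).max(axis=1))
```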
Regarding anthropomorphism: I think the cause of it is that part of our brain is sort of hardcoded to simulate agents. And it seems to be the most computationally powerful part of our brain (or maybe not a single part but a network, a set of systems, whatever).
The hardest problem facing our minds is the problem of other minds, so when confronted with a truly hard problem we attack it with our most powerful cognitive weapon: the agent simulation mechanisms in our brains.
As we come to understand natural processes we no longer need to simulate them as agents, and we shift to weaker mental subsystems and start thinking about them mechanically.
Basically, human behaviour is chaotic and no true rules exist, only heuristics. As we find true mechanistic rules for phenomena we no longer need to simulate them as we would humans (or other agents/hyper-chaotic systems).
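As an analogy (not a model of anyone’s behaviour): even when a true mechanistic rule exists, chaos can make it useless for long-range prediction. The logistic map below is fully specified by one line, yet two trajectories that start 1e-10 apart become completely uncorrelated within a few dozen steps, so in practice you’re back to short-horizon heuristics.

```python
# Logistic map in its chaotic regime: x_{n+1} = r * x_n * (1 - x_n), r = 4.
r = 4.0
x, y = 0.2, 0.2 + 1e-10            # nearly identical starting points

for step in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
```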
For instance, throughout history people have been able to model and interact with traders from neighbouring or distant civilizations, even though they might think very differently.
Humans think very, very similarly to each other, compared with random minds from the space of possible minds. For example, we recognise anger, aggression, fear, and so on, and share a lot of cultural universals: https://en.wikipedia.org/wiki/Cultural_universal
Is the space of possible minds really that huge (or maybe really that alien?), though? I agree about humans having … an instinctive ability to intuit the mental states of other humans. But isn’t that partly learnable as well? We port this simulation ability relatively well to animals once we get used to their tells. Would we really struggle to learn the tells of other minds, as long as they were somewhat consistent over time and didn’t have the ability to perfectly lie?
Like, what’s a truly alien mind? At the end of the day we’re Turing complete; we can simulate any computational process, albeit inefficiently.
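To put a sketch under that Turing-completeness claim: a universal interpreter is genuinely tiny. The dozen lines of Python below will step through any Turing machine’s rule table; the particular machine shown (a made-up example that increments a binary number) is incidental, the point is that the interpreter doesn’t care how ‘alien’ the rules are, it just runs them slower.

```python
from collections import defaultdict

def run(rules, tape, state="start", pos=0, max_steps=10_000):
    """Step through any Turing machine given as a rule table."""
    cells = defaultdict(lambda: " ", enumerate(tape))  # blank = " "
    for _ in range(max_steps):
        if state == "halt":
            break
        # Rule table maps (state, read symbol) -> (write, head move, next state).
        write, move, nxt = rules[(state, cells[pos])]
        cells[pos] = write
        pos += move
        state = nxt
    return "".join(cells[i] for i in sorted(cells))

# Example machine: scan right to the end of a binary number, then add 1
# with carries while moving back left.
increment = {
    ("start", "0"): ("0", +1, "start"),
    ("start", "1"): ("1", +1, "start"),
    ("start", " "): (" ", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1",  0, "halt"),
    ("carry", " "): ("1",  0, "halt"),
}

print(run(increment, "1011 "))  # -> "1100 " (11 + 1 = 12 in binary)
```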