The problem of emulating human “minds” might be much more difficult than just emulating the human brain. Here are three quotes that highlight why this might be the case:
What we call “mind” is really embodied. There is no true separation of mind and body. These are not two independent entities that somehow come together and couple. The word “mental” picks out those bodily capacities and performances that constitute our awareness and determine our creative and constructive responses to the situation we encounter. Mind isn’t some mysterious abstract entity that we bring to bear on our experience. Rather, mind is part of the very structure and fabric of our interactions with our world.

From Philosophy in the Flesh, by George Lakoff and Mark Johnson
They say the division between mind and environment is less rigid than previously thought; the mind uses information within the environment as an extension of itself.
While a person can learn a route through a maze and then negotiate the maze by memory, a person would appear equally smart to an outsider if they simply followed signposts in the maze to reach the exit. “A smart person, like the droplets, is often smart due to canny combinations of internal and external structure,” says Clark.

From What a maze-solving oil drop tells us of intelligence (Original)
It’s widely thought that human language evolved in universally similar ways, following trajectories common across place and culture, and possibly reflecting common linguistic structures in our brains. But a massive, millennium-spanning analysis of humanity’s major language families suggests otherwise.
Instead, language seems to have evolved along varied, complicated paths, guided less by neurological settings than cultural circumstance. If our minds do shape the evolution of language, it’s likely at levels deeper and more nuanced than many researchers anticipated.
“It’s terribly important to understand human cognition, and how the human mind is put together,” said Michael Dunn, an evolutionary linguist at Germany’s Max Planck Institute and co-author of the new study, published April 14 in Nature. The findings “do not support simple ideas of the mind as a computer, with a language processor plugged in. They support much-more complex ideas of how language arises.”

From Evolution of Language Takes Unexpected Turn (see also Is Grammar More Cultural Than Universal? Study Challenges Chomsky’s Theory)

Even more: Embodied cognition
I don’t know what to make of this, as I haven’t done any research into it, but I thought it should be accounted for when one wants to talk about the emulation of “minds”. A lot of what makes us human and intelligent, and what shapes our languages, values, and goals, seems to be a complex interrelationship between our brain, body, culture, and environment.
Most of the quotes above (at least, the ones that make sense) are talking about the way that intelligence grows in the first place, not about what would happen if you changed the context for a grown adult brain. Since a person paralyzed in an accident or stroke can nevertheless keep their mental faculties, it seems that changing the connection between the brain and its body/environment need not destroy the intellect that’s already formed.
Also, it would be pretty reasonable to simulate some kind of body and environment (in less detail than one simulates the brain) while you’re at it. Would that address your query?
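To make that suggestion concrete, here is a minimal sketch of how a detailed brain model could be coupled to a much coarser body/environment simulation running at a slower timestep. This assumes nothing about how an actual emulation would be built; all the names, dynamics, and numbers below are made up for illustration.

```python
# A minimal sketch: couple a detailed (expensive) brain emulation to a much
# coarser body/environment simulation. Purely illustrative; every name and
# parameter here is hypothetical.

BRAIN_DT = 0.0001  # fine timestep for the neural model, in seconds
WORLD_DT = 0.01    # coarse timestep for body/environment, in seconds
STEPS_PER_TICK = round(WORLD_DT / BRAIN_DT)  # brain steps per world step


class CoarseWorld:
    """Low-fidelity body plus environment: just enough to close the loop."""

    def __init__(self, target: float = 1.0):
        self.position = 0.0
        self.target = target

    def step(self, motor_command: float) -> float:
        # Crude body dynamics; returns a sensory signal (distance to a goal).
        self.position += motor_command * WORLD_DT
        return self.target - self.position


class BrainEmulation:
    """Stand-in for the detailed whole-brain model."""

    def __init__(self):
        self.state = 0.0

    def step(self, sensory_input: float) -> float:
        # Placeholder dynamics: a leaky integrator of the sensory signal.
        # A real emulation would integrate detailed neural state here.
        self.state = 0.99 * self.state + 0.01 * sensory_input
        return self.state  # motor output


world, brain = CoarseWorld(), BrainEmulation()
sense = 0.0
for _ in range(1000):                # outer loop at the coarse world rate
    for _ in range(STEPS_PER_TICK):  # inner loop at the fine brain rate
        motor = brain.step(sense)
    sense = world.step(motor)        # the world updates once per coarse tick

print(f"final position: {world.position:.3f}")  # approaches the target
```

The only point of the sketch is that the body/environment side can run at far lower fidelity and a far coarser timestep than the brain side and still close the sensorimotor loop.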
Whole brain emulation will probably work regardless of the environment or body, as long as you use a “grown-up” mind. What I thought needed to be addressed is the potential problem of emulating empty mind templates without a rich environment or bodily sensations while still expecting them to exhibit “general” intelligence, i.e. to solve problems in the physical and social universe.
The same might be true for a seed AI. It will be able to use its given capabilities, but it needs some sort of fuel to solve “real-life” problems like social engineering.

An example would be a boxed seed AI that is going FOOM. Either the ability to trick people into letting it out of the box is given from the start, or it has to be acquired. How is it going to acquire it?
If a seed AI is closer to AIXI, i.e. intelligence in its most abstract form, it might need to be bodily embedded in the environment it is supposed to master. Consequently, an AI that is capable of taking over the world via an Internet connection would require either a lot more hard-coded, concrete “intelligence” or a lot of time.
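For readers unfamiliar with AIXI: in Hutter’s standard formulation (background knowledge, not anything from this discussion), the agent chooses actions by maximizing expected total reward over all computable environments, weighted by their simplicity:

$$a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left( r_k + \cdots + r_m \right) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

where $U$ is a universal Turing machine, $q$ ranges over environment programs, $\ell(q)$ is the length of $q$, and $a_i$, $o_i$, $r_i$ are the actions, observations, and rewards up to the horizon $m$. Nothing in this definition supplies a body, a culture, or any domain knowledge; everything has to be extracted from the raw observation stream.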
I just don’t see how an abstract AGI could possibly solve something like social engineering without a lot of time or the hard-coded ability to do so.

Just imagine you emulated a grown-up human mind and it wanted to become a pickup artist: how would it do that with only an Internet connection? It would need some sort of avatar, at the very least, and would then have to wait for the environment to provide a lot of feedback.

So even if we’re talking about the emulation of a grown-up mind, it will be really hard to acquire some capabilities. How, then, is the emulation of a human toddler going to acquire those skills? Even worse, how is some sort of abstract AGI, which lacks all the hard-coded capabilities of a human toddler, going to do it?
There seem to be some arguments in favor of embodied cognition...