At the Singularity Summit’s “Meet and Greet”, I spoke with both Ben Goertzel and Eliezer Yudkowsky (among others) about this specific problem.
I am FAR more in line with Ben’s position than with Eliezer’s (probably because both Ben and I are working or studying directly on the “how to do” aspect of AI, rather than concocting philosophical conundrums for AI, such as Eliezer’s “Paperclip Maximizer” scenario, which I find highly dubious).
AI isn’t going to spring fully formed out of some box of parts. It may be an emergent property of something, but if we worry about all of the possible places from which it could emerge, then we might as well worry about things like ghosts and goblins that we cannot see (and haven’t seen) popping up suddenly as a threat.
At Bard College on the weekend of October 22nd, I attended a conference where this topic was discussed a bit. I also spoke to James Hughes, head of the IEET (Institute for Ethics and Emerging Technologies), about this problem. He believes that the SIAI tends to be overly dramatic about hard takeoff scenarios at the expense of more important ethical problems… And, he and I also discussed the specific problems with “The Scary Idea,” which tends to ignore the gradual progress in understanding human values and cognition, and how these are being incorporated into AI as we move toward the creation of a Constructed Intelligence (CI, as opposed to AI) that is equivalent to human intelligence.
Also, WRT this comment:

For another example, you can’t train tigers to care about their handlers. No matter how much time you spend with them and care for them, they sometimes bite off arms just because they are hungry. I understand most big cats are like this.
You CAN train (training is not quite the right word for it) tigers and other big cats to care about their handlers. It requires a type of training and teaching that goes on from birth, but there are plenty of big cats who don’t attack their owners or handlers simply because they are hungry, or for some other similar reason. They might accidentally injure a handler because they do not have the capacity to understand the fragility of a human being, but that is a lack of cognitive capacity, not a case of a higher intelligence accidentally damaging something fragile… A more intelligent mind would be capable of understanding things like physical frailty and taking steps to avoid damaging a more fragile body… But the point still stands: big cats can and do form deep emotional bonds with humans, and will even go as far as to try to protect and defend those humans (which can sometimes lead to injury of the human in its own right).
And, I know this from having worked with a few big cats, and having a sister who is a senior zookeeper at the Houston Zoo (and head curator of the SW US Zoo’s African Expedition) who works with big cats ALL the time.
Back to the point about AI.
It is going to be next to impossible to solve the problem of “Friendly AI” without first creating AI systems that have social cognitive capacities. Just sitting around “Thinking” about it isn’t likely to be very helpful in resolving the problem.
That would be what Bertrand Russell calls “Gorging upon the Stew of every conceivable idea.”
What are the more important ethical problems?
Ben says:
http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html

Personally, I’m a lot more worried about nasty humans taking early-stage AGIs and using them for massive destruction, than about speculative risks associated with little-understood events like hard takeoffs.
That seems fairly reasonable. The SIAI are concerned that the engineers might screw up so badly that a bug takes over the world—and destroys everyone.
Another problem is if a Stalin or a Mao gets hold of machine intelligence. The latter seems like a more obvious problem.
A psychotic egoist like Stalin or a non-humanist like Hitler is indeed terrifying, but I’m not convinced that giving a great increase in power and intelligence to someone like a Mao or a Lord Lytton, who caused millions of deaths by doing something they thought would improve people’s lives, would lead to a worse outcome than we got in reality. Granted, for something like the Cultural Revolution these mistakes might be subtle enough to get into an AI, but it’s hard to imagine them getting a computer to say “yes, the peasants can live on 500 calories a day, increase the tariff” unless they were deliberately trying to be wrong, which they weren’t.
Moral considerations aside, the real causes of the mass famines under Mao and Stalin can be understood from a perspective of pure power and political strategy. From the point of view of a strong centralizing regime trying to solidify its power, the peasants are always the biggest problem.
Urban populations are easy to control for any regime that firmly holds the reins of the internal security forces: just take over the channels of food distribution, ration the food, and make obedience a precondition for eating. Along with a credible threat to meet any attempts at rioting with bayonets and live bullets, this is enough to ensure obedience of the urban dwellers. In contrast, peasants always have the option of withdrawing into an autarkic self-sufficient lifestyle, and they will do it if pressed hard by taxation and requisitioning. In addition, they are widely dispersed, making it hard for the security forces to coerce them effectively. And in an indecisive long standoff, the peasants will eventually win, since without buying or confiscating their food surplus, everyone else starves to death.
Both the Russian and the Chinese communists understood that nothing but the most extreme measures would suffice to break the resistance of the peasantry. When the peasants responded to confiscatory measures by withdrawing to subsistence agriculture, they knew they’d have to send the armed forces to confiscate their subsistence food and let them starve, and eventually force the survivors into state-run enterprises where they’d have no more capacity for autarky than the urban populations. (In the Russian case, this job was done very incompletely during the Revolution, which was followed by a decade of economic liberalization, after which the regime finally felt strong enough to finish the job.)
(Also, it’s simply untenable to claim that this was due to some special brutality of Stalin and Mao. Here is a 1918 speech by Trotsky that discusses the issue in quite frank terms. Now of course, he’s trying to present it as a struggle against the minority of rich “kulaks,” not the poorer peasants, but as Zinoviev admitted a few years later, “We [the Bolsheviks] are fond of describing any peasant who has enough to eat as a kulak.”)
Not directly relevant, but Mao seems to have known that his policies were causing mass starvation. Of course, with a tame AGI he could have achieved communism with a very different kind of Great Leap.
Oh yes, I see I’ve inadvertently fallen into that sordid old bromide about communism being a good idea that unfortunately failed to work. Still, committing to an action that one knows will cause millions of deaths is quite different from learning about it as one is doing it. Certainly in the case of the British in India, their Malthusian rhetoric and victim-blaming was so at odds with their earlier talk of modernizing the subcontinent that it sounds like a post-hoc rationalization of the genocide. I realize now, though, that I don’t know enough about the PRC to judge whether a similar phenomenon was at work there.
Well… That is hard to communicate now, as I will need to extricate the problems from the specifics that were communicated to me (in confidence)...
Let’s see...
1) That there is a dangerous political movement in the USA that seems to prefer revealed knowledge to scientific understanding and investigation.
2) Poverty.
3) Education.
4) Hunger (I myself suffer from this problem—I am disabled, on a fixed income, and while I am in school again and doing quite well, I still have to make choices sometimes between necessities… And, I am quite well off compared to some I know).
5) The lack of political dialog and the preference for ideological certitude over pragmatic solutions and realistic uncertainty.
6) The fact that there exists a great amount of crime among the white-collar crowd that goes both unchecked and unpunished when it is exposed (Madoff was a fluke in that regard).
7) The various “Wars” that we declare on things (Drugs, Terrorism, etc.). “War” is a poor paradigm to use, and it leads to more damage than it corrects (especially in the two instances I cited).
8) The real wars that are happening right now (and not just those waged by the USA and allies).
Some of these were explicitly discussed.
Some will eventually be resolved, but that doesn’t mean that they should be ignored until that time. That would be akin to seeing a man dying of starvation, while one has the capacity to feed him, yet thinking “Oh, he’ll get some food eventually.”
And, some may just be perennial problems that we will have to deal with for some time to come.
I misread you as saying that important ethical problems about FAI were being ignored, but yes, the idea that FAI is the most important thing in the world leaves quite a bit out, and not just great evils. There’s a lot of maintenance to be done along the way to FAI.
Madoff’s fraud was initiated by a single human being, or possibly Madoff and his wife. It was comprehensible without adding a lot of what used to be specialist knowledge. It’s a much more manageable sort of crime than major institutions becoming destructively corrupt.
I think major infrastructure rebuilding is probably closer to the mark than “maintenance.”
It is going to be next to impossible to solve the problem of “Friendly AI” without first creating AI systems that have social cognitive capacities. Just sitting around “Thinking” about it isn’t likely to be very helpful in resolving the problem.

I am guessing that this unpacks to “to create an FAI you need some method to create AGI. For the latter we need to create AI systems with social cognitive capabilities (whatever that means—NLP?)”. Doing this gets us closer to FAI every day, while “thinking about it” doesn’t seem to.
First, are you factually aware that some progress has been made in a decision theory that would give some guarantees about future AI behavior?
Second, yes, perhaps whatever you’re tinkering with is getting closer to an AGI, which is what FAI runs on. It is also getting us closer to an AGI which is not FAI, if the “Thinking” is not done first.
Third, if the big cat analogy did not work for you, try training a Komodo dragon.
Yes, that is close to what I am proposing.
No, I am not aware of any facts about progress in decision theory that would give any guarantees of the future behavior of AI. I still think that we need to be far more concerned with people’s behaviors in the future than with AI. People are improving systems as well.
As far as the Komodo Dragon, you missed the point of my post, and the Komodo dragon just kinda puts the period on that:
“Gorging upon the stew of...”
Please take a look here: http://wiki.lesswrong.com/wiki/Decision_theory
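For concreteness, here is a minimal toy sketch (purely illustrative, my own, and not taken from that wiki page or from SIAI’s work) of the baseline that any decision theory starts from: pick the action with the highest expected utility under your model of the world. The proposals discussed on that page are about cases where this naive recipe misbehaves (Newcomb-like problems, agents reasoning about copies of themselves), not about the arithmetic itself.

    # Toy expected-utility maximizer (illustrative only; hypothetical actions/numbers).
    # Each action maps to a list of (probability, utility) pairs for its possible
    # outcomes; we simply pick the action with the highest expected utility.
    outcomes = {
        "press_lever": [(0.9, 10.0), (0.1, -100.0)],  # usually good, occasionally disastrous
        "do_nothing":  [(1.0, 0.0)],
    }

    def expected_utility(lotteries):
        return sum(p * u for p, u in lotteries)

    best_action = max(outcomes, key=lambda a: expected_utility(outcomes[a]))
    for action in outcomes:
        print(action, expected_utility(outcomes[action]))
    print("chosen:", best_action)

The point of the decision-theory work is that an agent following only this recipe can be money-pumped or outmaneuvered in certain problems, which is why “guarantees about future AI behavior” depend on getting the theory right rather than just the code.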
As far as the dragon, I was just pointing out that some minds are not trainable, period. And even if training works well for some intelligent species like tigers, it’s quite likely that it will not be transferable (eating the trainer: not OK; eating a baby: OK).
Yes, I have read many of the various Less Wrong Wiki entries on the problems surrounding Friendly AI.
Unfortunately, I am in the process of getting an education in Computational Modeling and Neuroscience. (I was supposed to have started at UC Berkeley this fall, but budget cuts in the Community Colleges of CA resulted in the loss of two classes necessary for transfer, so I will have to wait till next fall to start… And, I am now thinking of going to UCSD, where they have the Institute of Computational Neuroscience (or something like that—it’s where Terry Sejnowski teaches), among other things that make it also an excellent choice for what I wish to study.) This sort of precludes being able to focus much on the issues that tend to come up often among many people on Less Wrong (particularly those from the SIAI, whom I feel are myopically focused upon FAI to the detriment of other things).
While I would eventually like to see if it is even possible to build some of the Komodo-Dragon-like superintelligences, I will probably wait until such a time as our native intelligence is a good deal greater than it is now.
This touches upon an issue that I first learned about from Ben. The SIAI seems to be putting forth the opinion that AI is going to spring fully formed from someplace, in the same fashion that Athena sprang fully formed (and clothed) from the head of Zeus.
I just don’t see that happening. I don’t see any Constructed Intelligence as being something that will spontaneously emerge outside of any possible human control.
I am much more in line with people like Henry Markram, Dharmendra Modha, and Jeff Hawkins, who believe that the types of minds we will tend to work towards (models of the mammalian brain) will trend toward Constructed Intelligences (CI, as opposed to AI) that naturally prefer our company, even if we are a bit “dull witted” in comparison.
I don’t so much buy the “Ant/Amoeba to Human” comparison, simply because mammals (almost all of them) tend to have some qualities that ants and amoebas don’t… They tend to be cute and fuzzy, and like other cute/fuzzy things. Building a CI modeled after a mammalian intelligence will probably share that trait. It doesn’t mean it is necessarily so, but it does seem to be more than less likely.
And, considering it will be my job to design computational systems that model cognitive architectures, I would prefer to work toward that end until such a time as it is shown that ANY work toward that end is dangerous enough to not do that work.