1) The default case is that AGI will be neither malevolent nor benevolent but will simply have no appreciation of human values and therefore will not care to protect them.
2) An AGI is likely to become more powerful than humans at some point. Given #1, such a being poses a danger.
3) Given #1,2, we have to figure out how to make AGI that does protect humans and humane values.
4) Human moral value is very complex and it is therefore extremely difficult to approach #3, but worth trying given the associated risks.
yada-yada-yada
You know what your problem is? You and other AI-risk advocates are only talking to people with the same mindset, or people who already share most of your assumptions.
Stop that. Go and talk to actual AI researchers. Or talk to Timothy Gowers, Holden Karnofsky etc.
See what actual experts, world-class mathematicians or even neuroscientists have to say. I have done it. If you can convince them then your arguments are strong. Otherwise you might just be fooling yourself.
(upvoted because it didn’t deserve to be negative)
You’re making strong assumptions about what I am, and who I’ve talked to :-)
I’ve talked with actual AI researchers and neuroscientists (I’m a mathematician myself) - we’re even holding conferences full of these kinds of people. If we have time to go through the arguments, generally they end up agreeing with my position (which is that intelligence explosions are likely dangerous, and not improbable enough that we shouldn’t look into them). The people who I have least been able to convince are the philosophers, in fact.
You’re making strong assumptions about what I am, and who I’ve talked to :-)
Given my epistemic state it was a reasonable guess that you haven’t talked to a lot of people that do not already fit the SI/LW memeplex.
I’ve talked with actual AI researchers and neuroscientists (I’m a mathematician myself) - we’re even holding conferences full of these kinds of people. If we have time to go through the arguments, generally they end up agreeing with my position (which is that intelligence explosions are likely dangerous, and not improbable enough that we shouldn’t look into them).
Fascinating. This does not reflect my experience at all. Have those people who ended up agreeing with you published their thoughts on the topic yet? How many of them have stopped working on AI and instead started to assess the risks associated with it?
I’d also like to know what conference you are talking about, other than the Singularity Summit where most speakers either disagree or talk about vaguely related research and ideas or unrelated science fiction scenarios.
There is also a difference between:
Friendly AI advocate: Hi, I think machines might become very smart at some point and we should think about possible dangers before we build such machines.
AI researcher: I agree, it’s always good to be cautious.
and
Friendly AI advocate: This is crunch time! Very soon superhuman AI will destroy all human value. Please stop working on AI and give us all your money so we can build friendly AI and take over the universe before an unfriendly AI can do it and turn everything into paperclips after making itself superhumanly smart within a matter of hours!
AI researcher: Wow, you’re right! I haven’t thought about this at all. Here is all my money, please save the world ASAP!
I am not trying to ridicule anything here. But there is a huge difference between having Peter Norvig speak at your conference about technological change and having him agree with you about risks from AI.
What it generally was:
AI Researcher: “Fascinating! You should definitely look into this. Fortunately, my own research has no chance of producing a super intelligent AGI, so I’ll continue. Good luck son! The government should give you more money.”
In other words, those researchers estimate the value of friendly AI research as a charitable cause at the share of their taxes that the government would assign to it, if it considered the cause at all, which they believe it should.
It’s hard to tell how seriously they really take risks from AI given that information.
It sounds like:
AI Researcher: Great story son, try your luck with the government. I am going to continue to work on practical AI in the meantime.
Indeed. I feel the absence of good counter-arguments was a more useful indication than their eventual agreement.
How much evidence that you are right does the absence of counter-arguments actually constitute?
If you are sufficiently vague, say “smarter than human intelligence is conceivable and might pose a danger”, it is only reasonable to anticipate counter-arguments from a handful of people like Roger Penrose.
If however you say that “1) it is likely that 2) we will create artificial general intelligence within this century that is 3) likely to undergo explosive recursive self-improvement and become superhumanly intelligent, 4) in a short enough time-frame to be uncontrollable, 5) take over the universe in order to pursue its goals, 6) ignore human values, 7) and thereby destroy all human value” and that “8) it is important to contribute money to save the world, 9) at this point in time, 10) by figuring out how to make such hypothetical AGIs provably friendly, and 11) that the Singularity Institute, or the Future of Humanity Institute, are the right organisations for this job”, then you can expect to hear counter-arguments.
If you weaken the odds of creating general intelligence to around 50-50, then virtually no one has given decent counter-arguments to 1)-7). The disconnect starts at 8)-11).
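The strength-of-evidence question here can be made concrete with a quick Bayesian back-of-envelope calculation. The sketch below is only illustrative: all of the probabilities are made-up assumptions, not anyone’s actual estimates, and the point is just that “no counter-argument appeared” is evidence exactly to the degree that silence is more likely when the thesis is true than when it is false.

```python
# Back-of-envelope Bayesian update: how much does the absence of
# decent counter-arguments support a risk thesis H?
# All numbers below are purely illustrative assumptions.

def posterior(prior, p_silence_given_h, p_silence_given_not_h):
    """Update P(H) after observing 'no decent counter-argument appeared'."""
    joint_h = prior * p_silence_given_h
    joint_not_h = (1 - prior) * p_silence_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Suppose silence is nearly guaranteed if H is true (0.9), but experts
# would probably have produced a rebuttal if H were false (silence 0.4).
# Then observing silence roughly doubles the odds in favour of H:
print(posterior(0.5, 0.9, 0.4))  # ≈ 0.692
```

Note that if silence were equally likely either way (e.g. because the audience already agrees, or cannot be bothered to argue), the likelihood ratio is 1 and the observation is no evidence at all, which is precisely the disagreement in this thread.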
How much evidence that you are right does the absence of counter-arguments actually constitute?
Quite strong evidence, at least for my position (which has somewhat wider error bars than SIAI’s). Most people who have thought about this at length tend to agree with me, and most arguments presented against it are laughably weak (hell, the best arguments against Whole Brain Emulation were presented by Anders Sandberg, an advocate of WBE).
I find the arguments in favour of the risk thesis compelling, and when they have the time to go through it, so do most other people with relevant expertise (I feel I should add, in the interest of fairness, that neuroscientists seemed to put much lower probabilities on AGI ever happening in the first place).
Of course the field is a bit odd, doesn’t have a wide breadth of researchers, and there’s a definite déformation professionnelle. But that’s not enough to change my risk assessment anywhere near to “not risky enough to bother about”.
“risky enough to bother about” could be interpreted as:
(in ascending order of importance)
Someone should actively think about the issue in their spare time.
It wouldn’t be a waste of money if someone was paid to think about the issue.
It would be good to have a periodic conference to evaluate the issue and reassess the risk every 10 years.
There should be a study group whose sole purpose is to think about the issue.
All relevant researchers should be made aware of the issue.
Relevant researchers should be actively cautious and think about the issue.
There should be an academic task force that actively tries to tackle the issue.
Money should actively be raised to finance an academic task force to solve the issue.
The general public should be made aware of the issue to gain public support.
The issue is of utmost importance. Everyone should consider contributing money to a group trying to solve the issue.
Relevant researchers that continue to work in their field, irrespective of any warnings, are actively endangering humanity.
This is crunch time. This is crunch time for the entire human species. And it’s crunch time not just for us, it’s crunch time for the intergalactic civilization whose existence depends on us. Everyone should contribute all but their minimal living expenses in support of the issue.
I find the arguments in favour of the risk thesis compelling, and when they have the time to go through it, so do most other people with relevant expertise...
Could you elaborate on the “relevant expertise” that is necessary to agree with you?
Further, why do you think everyone I asked about the issue either disagrees or continues to ignore the issue and work on AI? Even those who are likely aware of all the relevant arguments. And which arguments do you think the others are missing that would likely make them change their minds about the issue?
Further, why do you think everyone I asked about the issue either disagrees or continues to ignore the issue and work on AI?
Because people always do this with large-scale existential risks, especially ones that sound fringe. Why were there so few papers published on nuclear winter? What proportion of money was set aside for tracking near-earth objects as opposed to, say, extra police to handle murder investigations? Why is the World Health Organisation’s budget 0.006% of world GDP (with the CDC only twice as large)? Why are the safety requirements playing catch-up with the dramatic progress in synthetic biology?
As a species, we suck at prevention, and we suck especially at preventing things that have never happened before, and we suck especially especially at preventing things that don’t come from a clear enemy.
I have my doubts that, had I written to the relevant researchers about nuclear winter, they would have told me that it is a fringe issue. Probably a lot would have told me that they couldn’t write about it in the midst of the Cold War.
I also have my doubts that biologists would tell me that they think the issue of risks from synthetic biology is just bonkers. Although quite a few would probably tell me that the risks are exaggerated.
Regarding the murder vs. asteroid funding: I am not sure that it was very irrational, in retrospect, to avoid asteroid funding until now. The additional resources it would have taken to scan for asteroids a few decades ago, versus now, might outweigh the few decades in which nobody looked for possible asteroids on a collision course with Earth. But I don’t have any data to back this up.
Oh yes, and I forgot one common answer, which generally means I need pay no more attention to their arguments, and can shift into pure convincing mode: “Since the risks are uncertain, we don’t need to worry.”
Well said. Or, at least a good start.
Oh. Was the earlier part supposed to be satire?
No. I actually pretty much agree with it. My whole point is that to reduce risks from AI you have to convince people who do not already share most of your beliefs. I wanted to make it abundantly clear that people who want to hone their arguments shouldn’t do so by asking people closely associated with the SI/LW memeplex whether they agree with them. They have to hone their arguments by talking to people who actually disagree and figure out at what point their arguments fail.
See, it is very simple. If you are saying that all AI researchers and computer scientists agree with you, then risks from AI are pretty much solved, insofar as everyone who could possibly build an AGI is already aware of the risks and probably takes precautions (which is not enough of course, but that isn’t the point).
I am saying that you might be fooling yourself if you say, “I’ve been to the Singularity Summit and talked to a lot of smart people at LW meetups and everyone agreed with me on risks from AI; nobody had any counter-arguments”. Wow, no shit? I mean, what do you anticipate if you visit a Tea Party meeting and argue that Obama is doing a bad job?
I believe that I have a pretty good idea of what arguments would be perceived to be weak or poorly argued, since I am talking to a lot of people who disagree with SI/LW on some important points. And if I tell you that your arguments are weak, that doesn’t mean that I disagree or that you are all idiots. It just means that you have to hone your arguments if you want to convince others.
But maybe you believe that there are no important people left whom it would be worthwhile to have on your side. Then of course what I am saying is unnecessary. But I doubt that this is the case. And even if it is, honing your arguments might come in handy once you are forced to talk to politicians or other people across a large inferential distance.