Perhaps that is a motivation to completely (and effectively) prohibit research in the direction of creating superintelligent AI, licensed or otherwise, and concentrate entirely on human intelligence enhancement.
Ignoring how difficult this would be even if every country in the world WANTED to cooperate on it (secret development is easy when the work consists largely of code rather than easy-to-track hardware), the real problem comes from the high potential value of defecting. Much like nuclear weapons development, it would take a lot more than sanctions to convince a rogue nation NOT to try to develop AI for its own benefit, should this become a plausible course of action.
the real problem comes from the high potential value of defecting
What would anyone think they stood to gain from creating an AI, if they understood the consequences as described by Yudkowsky et al?
The situation is not “much like nuclear weapons development”, because nuclear weapons are actually a practical warfare device, and the comparison was not intended to imply this similarity. I just meant that we manage to keep nukes out of the hands of terrorists, so there is reason to be optimistic about our chances of preventing irresponsible or crazy people from successfully developing a recursively self-improving AI. That is difficult, but if creating and successfully implementing a provably safe FAI (without prior intelligence enhancement) is hopelessly difficult, even if only because the large majority of people wouldn’t consent to it, then the prohibition route may still be our best option.
The same things WE hope to gain from creating AI. I do not trust North Korea (for example) to properly decide on the relative risks/rewards of any given course of action it can undertake.
OK but it isn’t hard (or wouldn’t be in the context we are discussing) to come to the understanding that creating an intelligence explosion renders the interests of any particular state irrelevant. I’ve seen no evidence that the North Koreans are that crazy.
The problem would be people who think that something like CEV, implemented by present-day humans, is actually safe—and the people liable to believe that are more likely to be the type of people found here, not North Koreans or other non-Westerners.
I’d also be interested in hearing your opinion on the security concerns should we attempt to implement CEV and find that it shuts itself down or produces an unacceptable output.
OK but it isn’t hard (or wouldn’t be in the context we are discussing) to come to the understanding that creating an intelligence explosion renders the interests of any particular state irrelevant.
If you’re correct, then the best way to stop the optimists from trying is to make an indisputable case for pessimism and disseminate it widely. Otherwise, eventually someone else will get optimistic, and won’t see why they shouldn’t give it a go.
I expect that once recognition of the intelligence explosion as a plausible scenario becomes mainstream, pessimism about the prospects of (unmodified) human programmers safely and successfully implementing CEV or some such thing will be the default, regardless of what certain AI researchers claim.
In that case, optimists are likely to have their activities forcibly curtailed. If this did not turn out to be the case, then I would consider “pro-pessimism” activism to change that state of affairs (assuming nothing happens to change my mind between now and then). At the moment, however, I support the activities of the Singularity Institute, because they are raising awareness of the problem (which is a prerequisite for state involvement) and they are highly responsible people. The worst state of affairs would be one in which no one recognised the prospect of an intelligence explosion until it was too late.
ETA: I would be somewhat more supportive of a CEV in which only a select (and widely admired and recognised) group of humans was included. This seems to create an opportunity for the implementation of the CEV initial dynamic to be a compromise between intelligence enhancement and ordinary CEV, i.e. a small group of humans can be “prepared” and studied very carefully before the initial dynamic is switched on.
So really it’s a complex situation, and my post above probably failed to express the degree of ambivalence that I feel regarding this subject.
yeah, that’ll work.