I would assign a malicious superintelligence a higher probability than pure entropy over the space of superintelligences would suggest, because of the chance of something broken coming out of military research. I would still assign it a relatively low likelihood. I am not certain whether I would assign it a higher or lower likelihood than “automatically Friendly ones”; it depends on what you mean by that. Given an AI built with thought to maliciousness, I would assign it a higher probability of being malicious than I would assign an AI built without any thought to friendliness of being friendly, partly because there is perhaps a broader range of behaviors we might label “malicious”.
By an “automatically Friendly AI” I simply meant one that was Friendly without explicit programming for friendliness. I think that would be more likely than a malicious AI because there are good, rational reasons to be “friendly” (benefits from trade and so on) in the absence of reasons not to be. I can see no rational reason to be malicious; humans who are malicious are usually so for reasons (sadism, revenge, and so on) that I can’t see someone programming into an AI.
good, rational reasons to be “friendly” (benefits from trade and so on)
Is that why humans have been so friendly to the non-human inhabitants of lands we want to develop? Humans are likely to have almost nothing to offer an advanced superintelligence, just as an ant hill has almost nothing to offer me (except as an opportunity to destroy it and plant more grass).
There are good, rational reasons to be friendly in the short term.
The rational reason to be unfriendly in the long term is that sufficiently advanced optimizing processes are powerful, and outcomes that maximize the utility of one agent are not likely to also maximize the utility of other agents with different goals.
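A toy numerical sketch of that claim (mine, not the commenter's; the outcome count, trial count, and uniform random utilities are all arbitrary assumptions): if two agents have unrelated utility functions over the same space of outcomes, the outcome one would steer the world toward is almost never the outcome the other would pick.

```python
# Toy illustration: two agents with independent random utility
# functions over the same outcome space. The outcome that is best
# for one is rarely best for the other.
import random

random.seed(0)
N_OUTCOMES = 1000
TRIALS = 10_000

coincide = 0
for _ in range(TRIALS):
    u_a = [random.random() for _ in range(N_OUTCOMES)]
    u_b = [random.random() for _ in range(N_OUTCOMES)]
    # The outcome each agent would optimize for if it had full control:
    best_a = max(range(N_OUTCOMES), key=lambda i: u_a[i])
    best_b = max(range(N_OUTCOMES), key=lambda i: u_b[i])
    coincide += (best_a == best_b)

# With unrelated goals, the optima coincide only about 1/N of the time.
print(f"optima coincide in {coincide / TRIALS:.4%} of trials")
```

With unrelated goals the agents' preferred outcomes coincide at roughly the 1/N chance rate, which is the point: a powerful optimizer pursuing its own utility has no structural reason to land on outcomes we value.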
there are good, rational reasons to be “friendly” (benefits from trade and so on)
That is a very dangerous statement. A superintelligent AI doesn’t care about you one bit. If it is in the (unlikely) situation where it needs something from you that it cannot take by force, it may offer to trade, but I would be highly confident that it would shoot you in the back and take the goods the moment you let your guard down.
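To make the “shooting you in the back” logic concrete, here is a minimal payoff sketch (all names and numbers are my own illustrative assumptions, not anything from the thread): an agent whose utility function assigns no weight to your welfare trades only while taking by force is expensive, and defects the moment that cost drops below the price of honest exchange.

```python
# Hypothetical payoff sketch: the AI trades only while violence is
# costly, and defects as soon as your guard drops. All values assumed.

GOODS_VALUE = 10.0   # what the AI wants from you
TRADE_PRICE = 4.0    # what an honest trade costs the AI

def best_move(cost_of_violence: float) -> str:
    """Pick the action maximizing the AI's payoff; your welfare has no term."""
    take_payoff = GOODS_VALUE - cost_of_violence   # shoot you in the back
    trade_payoff = GOODS_VALUE - TRADE_PRICE       # honest exchange
    return "trade" if trade_payoff > take_payoff else "take"

for cost in (9.0, 5.0, 1.0):   # your guard dropping over time
    print(f"cost of violence {cost}: AI chooses to {best_move(cost)}")
# cost of violence 9.0: AI chooses to trade
# cost of violence 5.0: AI chooses to trade
# cost of violence 1.0: AI chooses to take
```

Nothing in the agent's decision rule values the trading partner; “friendly” behavior here is purely an artifact of the cost structure, and disappears with it.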