Pick a goal where your success doesn’t directly cause obvious problems
I agree, but I’m afraid value alignment doesn’t meet this criterion. (I’m copy-pasting my response on value alignment from elsewhere below.)
I don’t think value alignment of a super-takeover AI would be a good idea, for the following reasons:
1) It seems irreversible. If we align an AI with the wrong values, there seems to be little anyone can do about it after the fact.
2) The world is chaotic, and externalities are impossible to predict. Who would have guessed that the industrial revolution would lead to climate change? I think it’s very likely that an ASI will produce major, unforeseeable externalities over time. If we have aligned it in an irreversible way, we can’t correct for externalities that occur down the road. (Speed also makes it more likely that we can’t correct in time, so I think we should try to go slow.)
3) There is no agreement on which values are ‘correct’. Personally, I’m a moral relativist, meaning I don’t believe in moral facts. Although perhaps niche among rationalists and EAs, I think a fair number of humans share my beliefs. In my opinion, a value-aligned AI would not make the world objectively better, but merely change it beyond recognition, regardless of the specific values implemented (although which values are implemented would still matter). It’s very uncertain whether such change would be considered net positive by any surviving humans.
4) If one thinks that consciousness implies moral relevance, that AIs will be conscious, that creating more happy morally relevant beings is morally good (as MacAskill defends), and that AIs are more efficient than humans and other animals, then the consequence seems to be that we (and all other animals) will be replaced by AIs. I consider that an existentially bad outcome in itself, and value alignment could point straight at it.
I think that, at a minimum, any alignment plan would need to be reversible by humans, and to my understanding value alignment is not. I’m somewhat more hopeful about intent alignment, with e.g. a UN commission providing the AI’s input.
The killer app for ASI is, and always has been, to have it take over the world and stop humans from screwing things up
I strongly disagree with this being a good outcome, mostly because I would expect the majority of humans not to want it. If humans actually elected an AI to be in charge, and it could be voted out as well, I could live with that. But a takeover by force by an AI is as bad for me as a takeover by force by a human, and much worse if it’s irreversible. If an AI is really such a good leader, let it show that by being elected (if humans decide that an AI should be allowed to run at all).