@Eliezer:
The problem that arises with this point of view is that you have not defined one rightness, you have defined approximately 6 billion rightnesses, one for each person on the planet, and they are all different. Some—perhaps most of them—are not views that the readers of this blog would identify with.
The question of whose rightness gets to go into the AI still arises, and I don’t think that the solution you have outlined is really up to the task of producing a notion of rightness that everyone on the planet agrees with. Not that I blame you: it’s an impossible task!
I concede that the ethical system for a superintelligent seed AI is not the place to try out new moral theories. The ideal situation would be one where the change of substrate—of intelligence moving from flesh to silicon—is done without any change of ethical outlook, so as to minimize the risk of something uncalled for happening.
I would endorse a more limited effort focused on recreating the most commonly accepted values of our society: namely, rational Western values. I would also want to capture those values as a narrow AI problem before anyone switches on a putative seed AGI; such an effort might involve extensive data mining, testing, and calibration in the real world. This would come closer to the ideal of minimizing how much the mind changes while the substrate changes. Synthesizing and extrapolating the widely differing values of every human on the planet has never been attempted before, and it is a bad idea to do something new and risky at the same moment one switches on a seed AI.
I think that there is a lot to be said for realist and objective ethics, but the application of such work is not seed AI. It is the other possible routes to superintelligence and advanced technology, which will likely unfold under the guidance of human society at large. Technology policy decisions require an ethical and value outlook, so it is worth thinking about how to simplify and unify human values. This doesn’t actually contradict what you’ve said: you talk about the
“total trajectory arising out of that entire framework”
and for me, as for many philosophically minded people, attempting to unify and simplify our value framework is part of the trajectory.
I think that ethical guidance for technology policy decisions is probably marginally more urgent than ethical guidance for seed AIs—merely because there is very little chance of anyone writing a recursively self-improving seed AI in the next 10 years. That will probably change in the future, and I still think that designing ethical systems for seed AI is an extremely important task.