Why the huge focus on fully paternalistic Friendly AI rather than Obedient AI? It seems like a much lower-risk project. (and yes, I’m aware of the need for Friendliness in Obedient AI.)
Because the AI is better at estimating the consequences of following an order than the person giving the order.
There's also the issue that, if its utility criteria are about fulfilling orders, the AI is likely to act in ways that change the orders the person gives.
Also, even assuming a “right” way of making obedient FAI is found (for example, one that warns you if you’re asking for something that might bite you in the ass later), there remains the problem of who is allowed to give orders to the AI. Power corrupts, etc.