This version is essentially Eliezer’s “complexity and fragility of values”, right?
Basically, but there is a separate point here that greater optimization power doesn’t help with the problem and instead makes it worse. I agree that the word “orthogonality” is somewhat misleading.
David Dalrymple was nice enough to illustrate my concern with “orthogonality” just as we’re talking about it. :)
...which also presented an opportunity to make a consequentialist argument for FAI under the assumption that all AGIs are good.