This version is essentially Eliezer’s “complexity and fragility of values”, right? I suggest we keep calling it that, instead of “orthogonality”, which again sounds like too strong a claim and makes people less likely to consider it seriously.
This version is essentially Eliezer’s “complexity and fragility of values”, right?
Basically, but there is a separate point here: greater optimization power doesn’t help with the problem, and instead makes it worse. I agree that the word “orthogonality” is somewhat misleading.
David Dalrymple was nice enough to illustrate my concern with “orthogonality” just as we’re talking about it. :)
...which also presented an opportunity to make a consequentialist argument for FAI under the assumption that all AGIs are good.