The inference from “I believe X to be like me” to “whatever I decide, X will decide also” seems tenuous without some proof of likeness, and that kind of guarantee is beyond anything possible with humans...
Maybe you could aspire to such determinism in a proven-correct software system running on proven-robust hardware.
Well, yeah, this is primarily a theory for AIs dealing with other AIs.
You could possibly talk about human applications if you knew that the N of you had the same training as rationalists, or if you assigned probabilities to the others having such training.
For X to be able to model the decisions of Y with 100% accuracy, wouldn’t X need to be strictly more sophisticated than Y?
If so, how could supposedly symmetrical agents remain symmetrical?
Nope. Agents can reason about each other’s source code (via provability logic) rather than fully simulating each other, so no regress of sophistication is needed: http://arxiv.org/abs/1401.5577
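To make the intuition concrete, here is a toy sketch (my own illustration, not the linked paper’s construction, which uses Löb’s theorem instead of a step budget). Two “FairBots” each simulate the opponent with a shrinking fuel counter; the regress bottoms out at a base case instead of requiring either agent to be more sophisticated than the other, and the symmetry between identical agents is preserved:

```python
def fair_bot(opponent, fuel: int) -> str:
    """Cooperate iff a bounded simulation of the opponent (playing
    against us) cooperates. The fuel counter makes the mutual-simulation
    regress terminate instead of requiring a strictly stronger model."""
    if fuel == 0:
        # Base case: assume cooperation, so two FairBots bottom out in
        # mutual cooperation rather than recursing forever.
        return "C"
    return "C" if opponent(fair_bot, fuel - 1) == "C" else "D"

def defect_bot(opponent, fuel: int) -> str:
    """Always defects, regardless of the opponent."""
    return "D"

print(fair_bot(fair_bot, 3))    # two identical FairBots cooperate: "C"
print(fair_bot(defect_bot, 3))  # FairBot defects against DefectBot: "D"
```

The point is that X never needs a richer model than Y; both run the same code, and the symmetry question in the comment above dissolves once the simulation is bounded (or, as in the paper, replaced by proof search).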