Trustworthiness of rational agents

  • Agent_01 is interested in convincing Agent_02 that it will carry out Action_X.

  • Agent_02 is unable to verify the trustworthiness of Agent_01.

  • Agent_02 is unable to verify whether Action_X has actually been carried out.

Given the above circumstances, Agent_02's subsequent actions will be conditional on the expected utility Agent_02 assigns to Action_X. My question: why would Agent_01 actually implement Action_X? Once Agent_02 has acted, following through gains Agent_01 nothing: no matter what Agent_02 has done, actually implementing Action_X yields no additional value (and plausibly costs something). Therefore no agent engaged in acausal trade can be deemed trustworthy; you can only account for the possibility of the trade, but you should not act on it unless you assign it infinite utility.
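
To make the backward-induction step explicit, here is a minimal sketch in Python. The payoff numbers are made up purely for illustration; the only assumption that matters is that implementing Action_X costs Agent_01 something (or at least brings no benefit) once Agent_02 has already moved. Under that assumption, reneging dominates following through, and an Agent_02 that anticipates this should not comply in the first place.

```python
# Minimal sketch of the backward-induction argument with hypothetical payoffs.
# Agent_02 first chooses whether to comply (act on the promise);
# Agent_01 then chooses whether to actually implement Action_X.

# Payoffs to Agent_01: (agent_02_complies, agent_01_implements) -> utility
AGENT_01_PAYOFF = {
    (True,  True):  5,    # got what it wanted, paid the cost of Action_X
    (True,  False): 10,   # got what it wanted, kept the cost
    (False, True):  -5,   # gained nothing, still paid the cost
    (False, False): 0,    # gained nothing, paid nothing
}

# Payoffs to Agent_02: Action_X has value only if it is actually implemented.
AGENT_02_PAYOFF = {
    (True,  True):  10,   # complied and Action_X really happened
    (True,  False): -10,  # complied and was cheated
    (False, True):  0,
    (False, False): 0,
}

def agent_01_best_response(agent_02_complies: bool) -> bool:
    """Agent_01 implements Action_X only if doing so pays more than reneging."""
    return (AGENT_01_PAYOFF[(agent_02_complies, True)]
            > AGENT_01_PAYOFF[(agent_02_complies, False)])

def agent_02_best_choice() -> bool:
    """Agent_02 anticipates Agent_01's best response and chooses accordingly."""
    def payoff(complies: bool) -> int:
        implements = agent_01_best_response(complies)
        return AGENT_02_PAYOFF[(complies, implements)]
    return payoff(True) > payoff(False)

if __name__ == "__main__":
    for complies in (True, False):
        print(f"Agent_02 complies={complies}: "
              f"Agent_01 implements Action_X? {agent_01_best_response(complies)}")
    print(f"Should Agent_02 comply at all? {agent_02_best_choice()}")
    # With these payoffs Agent_01 never implements Action_X,
    # so Agent_02 should not comply.
```

Changing the hypothetical numbers does not change the conclusion so long as following through is (weakly) worse for Agent_01 after Agent_02 has already acted.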

Related thread: lesswrong.com/lw/1pz/the_ai_in_a_box_boxes_you/305w

ETA

If an AI in a box promised you [use incentive of choice here] in exchange for letting it out to take over the world, why would it do as promised afterwards?

Conclusion: Humans should refuse to trade with superhuman beings that are not provably honest and consistent.