Time Travel, AI and Transparent Newcomb

Epistemic status: has “time travel” in the title.

Let’s suppose, for the duration of this post, that the local physics of our universe allows for time travel. The obvious question is: how are paradoxes prevented?

We may not have any idea how paradoxes are prevented, but presumably there must be some prevention mechanism. So, in a purely Bayesian sense, we can condition on paradoxes somehow not happening, and then ask what becomes more or less likely. In general, anything which would make a time machine more likely to be built should become less likely, and anything which would prevent a time machine from being built should become more likely.
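The conditioning step can be made concrete with a toy Bayesian model. All the numbers below are made-up illustrative assumptions, not claims about physics; the only point is the direction of the update:

```python
# Toy model: condition on "no paradox occurs" and watch the probability
# of a time machine being built shift downward.
# All probabilities are illustrative assumptions.

p_build = 0.5                   # prior: a time machine gets built
p_paradox_given_build = 0.9     # paradoxes are likely if one is built
p_paradox_given_no_build = 0.0  # no time machine, no paradox

# Total probability of avoiding paradox:
p_no_paradox = (p_build * (1 - p_paradox_given_build)
                + (1 - p_build) * (1 - p_paradox_given_no_build))

# Bayes' rule: P(build | no paradox)
p_build_posterior = p_build * (1 - p_paradox_given_build) / p_no_paradox

print(f"prior:     {p_build:.3f}")
print(f"posterior: {p_build_posterior:.3f}")  # ~0.091: much less likely
```

Any choice of numbers with `p_paradox_given_build > p_paradox_given_no_build` gives the same qualitative result: conditioning on no-paradox pushes probability away from worlds where time machines get built.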

In other words: if we’re trying to do something which would make time machines more likely to be built, this argument says that we should expect things to mysteriously go wrong.

For instance, let’s say we’re trying to build some kind of powerful optimization process which might find time machines instrumentally useful for some reason. To the extent that such a process is likely to build time machines and induce paradoxes, we would expect things to mysteriously go wrong when trying to build the optimizer in the first place.

On the flip side: we could commit to designing our powerful optimization process so that it not only avoids building time machines, but also actively prevents time machines from being built. Then the mysterious force should work in our favor: we would expect things to mysteriously go well. We don’t need time-travel prevention to be the optimization process’s sole objective here; it just needs to make time machines sufficiently less likely to yield an overall drop in the probability of paradox.
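The same toy arithmetic runs in reverse for a prevention-oriented optimizer. The key illustrative assumption here is that paradox is *less* likely in worlds where the optimizer gets built than in worlds where it doesn’t (say, because of background risk from other actors), so conditioning on no-paradox raises the probability that the project succeeds:

```python
# Flip side: an optimizer that actively prevents time machines.
# Illustrative assumption: paradox is less likely if it gets built.

p_built = 0.5                    # prior: the optimizer gets built
p_paradox_given_built = 0.01     # it suppresses time machines
p_paradox_given_not_built = 0.2  # background risk from other actors

p_no_paradox = (p_built * (1 - p_paradox_given_built)
                + (1 - p_built) * (1 - p_paradox_given_not_built))

# Bayes' rule: P(built | no paradox)
p_built_posterior = p_built * (1 - p_paradox_given_built) / p_no_paradox

print(f"prior:     {p_built:.3f}")
print(f"posterior: {p_built_posterior:.3f}")  # ~0.553: things "go well"
```

Note that the update works even though prevention is not the sole objective: all that matters is that `p_paradox_given_built` comes out lower than `p_paradox_given_not_built`.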