A Little Puzzle about Termination

[Final Update: Back to ‘Discussion’; struck out the initial framing, which was misleading.]

[Update: Moved to ‘Main’. Also, judging by the comments, it appears that most have misunderstood the puzzle and read way too much into it; user ‘Manfred’ seems to have got the point.]

[Note: This little puzzle is my first article. Preliminary feedback suggests some of you might enjoy it while others might find it too obvious, hence the cautious submission to ‘Discussion’; will move it to ‘Main’ if, and only if, it’s well-received.]


In his recent paper “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents”, Nick Bostrom states:

Even an agent that has an apparently very limited final goal, such as “to make 32 paperclips”, could pursue unlimited resource acquisition if there were no relevant cost to the agent of doing so. For example, even after an expected-utility-maximizing agent had built 32 paperclips, it could use some extra resources to verify that it had indeed successfully built 32 paperclips meeting all the specifications (and, if necessary, to take corrective action). After it had done so, it could run another batch of tests to make doubly sure that no mistake had been made. And then it could run another test, and another. The benefits of subsequent tests would be subject to steeply diminishing returns; however, so long as there were no alternative action with a higher expected utility, the agent would keep testing and re-testing (and keep acquiring more resources to enable these tests).

Let us take it from here.

It is tempting to say that a machine can never halt after achieving its goal, because it cannot know with full certainty that it has achieved it; it will continually verify, possibly to increasing degrees of certainty, whether the goal has been achieved, but it will never halt as such.

What if, starting from a naive goal G, the machine’s goal were redefined as “achieve G with probability p” for some p < 1? It appears this would not work either, since the machine could never be fully certain that it is p certain of having achieved G (and so on...).

Yet one can specify a set of conditions under which a program will terminate, so how is the argument above fallacious?


Solution in ROT13: Va beqre gb unyg fhpu na ntrag qbrfa’g arrq gb *xabj* vg’f c pregnva, vg bayl arrqf gb *or* c pregnva; nf gur pbaqvgvba vf rapbqrq, gur unygvat jvyy or gevttrerq bapr gur ntrag ragref gur fgngr bs c pregnvagl, ertneqyrff bs jurgure vg unf (shyy) xabjyrqtr bs vgf fgngr.
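
For concreteness, here is a minimal Python sketch of a verifying agent whose halting condition is encoded directly in the program. The prior, the error model of the tests, and the Bayesian update rule are all illustrative assumptions of mine, not anything from Bostrom’s paper; and since the sketch makes the ROT13 point explicit, skip it if you want to work out the puzzle yourself.

```python
import random

P = 0.99           # target: be at least 99% certain the paperclips were built correctly
FALSE_PASS = 0.1   # P(a test passes | goal NOT achieved) -- made-up error model
FALSE_FAIL = 0.1   # P(a test fails  | goal achieved)     -- made-up error model

def run_check(goal_achieved: bool) -> bool:
    """Simulated verification test; returns True if the test passed."""
    if goal_achieved:
        return random.random() > FALSE_FAIL
    return random.random() < FALSE_PASS

def agent(prior: float = 0.5, goal_achieved: bool = True) -> float:
    """Verify repeatedly and halt as soon as the agent *is* P certain.

    The while-test compares the agent's actual credence with P; no further,
    meta-level certainty about that credence is ever consulted.
    """
    credence = prior
    while credence < P:
        passed = run_check(goal_achieved)
        # Bayesian update of the credence that the goal was achieved.
        if passed:
            num = credence * (1 - FALSE_FAIL)
            den = num + (1 - credence) * FALSE_PASS
        else:
            num = credence * FALSE_FAIL
            den = num + (1 - credence) * (1 - FALSE_PASS)
        credence = num / den
    return credence  # reaching this line is the halt

if __name__ == "__main__":
    print(agent())   # typically halts after a few checks with credence >= 0.99
```

With these illustrative numbers the loop typically exits after about three passing checks. The only thing that matters for termination is that the credence actually crosses p; whether the agent also represents or proves anything about that credence is irrelevant to the while-test.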