For the benefit of future readers, I have a nitpick about the technical details.
I could not turn your outcome-pump scheme into a program that would actually execute (given a couple of magical functions receive_future_info() and send_info_back()).
This time machine sends only one bit of information into the past, called the “consistency bit”. The bit is 1 if some initial condition is met. The bit is 0 if the condition is not met, in which case the receiver, located in the past, tries different initial conditions until the condition is satisfied.
For example, a Novikov machine can be used to guess a cipher.
If 0 is returned from the future, a random number generator produces a new candidate, which is then tested.
If it is not the correct cipher, 0 is sent back to the past, and everything starts from the beginning.
But if the cipher is correct, 1 is sent to the past, and the loop stops.
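Here is a minimal sketch of this naive protocol in Python, with the two magical functions as stubs; the option list and the cipher test are hypothetical stand-ins of my own, not anything from the article:

```python
import random

# Hypothetical stand-ins for illustration only.
OPTIONS = ["A", "B", "C"]
CORRECT_CIPHER = "A"

def receive_future_info() -> int:
    """Magical: receive the one-bit message sent from the future."""
    raise NotImplementedError("requires a time machine")

def send_info_back(bit: int) -> None:
    """Magical: send one bit into the past."""
    raise NotImplementedError("requires a time machine")

def naive_run() -> None:
    """One pass through the causal loop under the naive protocol."""
    if receive_future_info() == 1:
        return                      # a past guess already succeeded; stop
    guess = random.choice(OPTIONS)  # 0 received: resample a candidate
    # Test the candidate; report success (1) or failure (0) to the past.
    send_info_back(1 if guess == CORRECT_CIPHER else 0)
```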
Given >=3 options (A, B, C, ...), the machine could guess B, receive 0, resample to C, test C, find that it fails, and send 0. A had been the correct answer all along, yet the timeline is consistent. To patch the protocol, you would want: “remember the received bit x; send x back if the test succeeds and ¬x otherwise”.
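A sketch of the patch, reusing the same hypothetical stubs as above: a failed test now always contradicts the received bit, so the only self-consistent histories left are those in which the guess is correct.

```python
def patched_run() -> None:
    """One pass under the patched protocol: flip the bit on failure."""
    x = receive_future_info()       # remember the received bit x
    guess = random.choice(OPTIONS)
    if guess == CORRECT_CIPHER:
        send_info_back(x)           # sent == received: this history is
                                    # self-consistent and therefore allowed
    else:
        send_info_back(1 - x)       # sent != received: Novikov consistency
                                    # rules this history out entirely
```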
Unfortunately, I believe the whole article has the same level of quality, despite its interesting ideas.
Thanks, that was actually what EY said in his quote, which I put just below my model—that we should change the bit each time. I somehow missed it (“send back a ‘0’ if a ‘1’ is recorded as having been received, or vice versa—unless some goal state is achieved”).
As I stated in the epistemic status, this article is just a preliminary write-up. I hope more knowledgeable people will write much better models of x-risks from time machines and will be able to point out where avturchin was wrong and explain what the real situation is.