Is there anything here that requires TDT in particular? It seems more like an application of Pascal’s Wager to self-modifying decision agents in general.
Anyway, the part I’d dispute is “...and the first ten milliseconds caused it to believe in the Christian god”. How is that going to happen? What would convince a self-modifying decision agent, in ten milliseconds, that Christianity is correct and requires absolute certainty, and that it itself has a soul that it needs to worry about, with high enough probability that it actually self-modifies accordingly? (The only thing I can think of is deliberately presenting faked evidence to an AI that has been designed to accept that kind of evidence… which is altogether too intentional a failure to be blamed on the AI.)
The point of the ten milliseconds is that the AI doesn’t know much yet.
Yes, you have a point. I’m pretty much answered.
My main point is that if Christianity has 50% certainty, the rational decision is to modify yourself to view it with 100% certainty: if the Christian god rewards only absolute faith, then at any substantial credence the expected payoff of self-modifying into certainty swamps the cost of being wrong (see the sketch after this comment). Take it as Pascal’s Wager, but far more specific.
And yeah, it doesn’t need TDT, on second thought. However, that was the first place I really thought about self-modifying decision agents.
Christianity would not be assigned 50% probability, even in total ignorance; 50% is not the right ignorance prior for any event more complicated than a coin flip. An AI sane enough to learn much of anything would have to assign it a prior based on an estimate of its complexity. (See also: the technical explanation of Occam’s Razor.)