Unless I’m misunderstanding UDT, isn’t speed another issue? An FAI must know what’s likely to be happening in the near future in order to prioritize its computational resources so they’re handling the most likely problems. You wouldn’t want it churning through the implications of the Loch Ness monster being real while a mega-asteroid is headed for the earth.
Wei Dai should not be worrying about matters of mere efficiency at this point. First we need to know what to compute via a fast approximation.
(There are all sorts of exceptions to this principle, and they mostly have to do with “efficient” choices of representation that affect the underlying epistemology. You can view a Bayesian network as efficiently compressing a raw probability distribution, but it can also be seen as committing to an ontology that includes primitive causality.)
But that path is not viable here. If UDT claims to make decisions independently of any anticipation, then it seems it must be optimal only on average over all the possible worlds it's prepared to compute an output for. That means it must be sacrificing optimality in this particular world-state (by No Free Lunch), even given infinite computing time, so having a fast approximation doesn't help.
If an AI running UDT is just as prepared to find Nessie as to find out how to stop the incoming asteroid, it will be inferior to a program designed just to find out how to stop asteroids. Expand the Nessie possibility to improbable world-states, and the asteroid possibility to probable ones, and you see the problem.
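The worry can be made concrete with a toy sketch (all numbers and the diminishing-returns model are hypothetical, purely for illustration): an agent with a fixed compute budget splits it across hypotheses, and a split weighted by probability and stakes beats a uniform, "equally prepared" split in expected value, because compute spent on Nessie is wasted in the worlds where the asteroid is what actually matters.

```python
def expected_value(allocation, hypotheses):
    """Expected payoff: each hypothesis pays off its stakes with its
    probability, scaled by sqrt of the compute it received (a toy
    diminishing-returns model of 'thinking harder helps, sublinearly')."""
    return sum(
        p * stakes * (compute ** 0.5)
        for (p, stakes), compute in zip(hypotheses, allocation)
    )

# (probability, stakes-if-true) -- hypothetical numbers
hypotheses = [
    (0.90, 100.0),   # incoming asteroid: probable, high stakes
    (0.10, 1.0),     # Loch Ness monster: improbable, low stakes
]
budget = 1.0

# "Equally prepared" for every possibility: uniform split
uniform = [budget / len(hypotheses)] * len(hypotheses)

# Prioritized: allocate proportionally to probability * stakes
weights = [p * s for p, s in hypotheses]
total = sum(weights)
weighted = [budget * w / total for w in weights]

print(expected_value(uniform, hypotheses))   # lower
print(expected_value(weighted, hypotheses))  # higher
```

The gap between the two allocations is exactly the cost of being equally prepared for the improbable: the uniform allocator pays it in every world where the probable, high-stakes hypothesis is the live one.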
Though I freely admit I may be completely lost on this.