This looks like a powerful heuristic for resolving environmental uncertainty; I don’t think it’s relevant to the study of logical uncertainty in particular. (Correct me if I’m wrong, though.)
What I do not understand is the difference between a Rube Goldberg machine inside a box and cards dealt to the other player. Why is one “logical” and the other “environmental” uncertainty?
Environmental uncertainty occurs when you don’t know which Rube Goldberg machine is in the box. Logical uncertainty occurs when you can’t deduce how a specific Rube Goldberg machine behaves (for lack of resources, not for lack of understanding of the rules).
In poker, not knowing which cards the opponent has is environmental uncertainty, and not knowing which hands they can make from those cards (because you lack computation, despite knowing the rules) would be logical uncertainty.
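A toy sketch of that distinction (my own illustration, not anything from the thread), using a hash function as a stand-in for a fully specified Rube Goldberg machine: the rules are completely known, yet the output is unknowable until you actually spend the computation.

```python
import hashlib

# Environmental uncertainty: you don't know which "machine" (input) is in the
# box. The agent simply hasn't observed this value.
hidden_input = b"dealt by the environment"

# Logical uncertainty: even with the input and the rules fully in hand, the
# output of this deterministic procedure is unknown to you until you run it.
known_input = b"fully specified machine"
digest = hashlib.sha256(known_input).hexdigest()

# No amount of knowing SHA-256's specification lets you state the digest
# without doing (or having done) the computation.
print(digest[:8])
```

The sketch is loose in one respect: a real resource-bounded agent would assign probabilities to the digest's value before computing it, which is exactly the problem logical uncertainty studies.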
Have you seen this book:
http://www.amazon.co.uk/Right-Thing-Rationality-Artificial-Intelligence/dp/026251382X
I am aware of it, but I haven’t read it. Have you read it, and do you think it’s worthwhile? (Note that I have read some of Russell’s other recent work on logical uncertainty, such as this short paper.)
It’s actually Stuart Russell putting out work that his very smart friend Eric Wefald did (Eric tragically lost his life, I think in a car accident, before the book went to print). It is a bit dated; much of the value is in the keyword “limited rationality” (which is how some people think about what you call logical uncertainty).
I remember Stuart talking about limited rationality back when I was an undergrad (so this idea was on people’s radar for quite a long while).