Folks, please write at least short reviews of technical articles: whether someone parsed the math, whether it appears sound, whether the message is interesting, and what exactly that message is. Also, this article lacks references: is the material it describes standard, and how does it relate to the field?
The result is my own work, but the reasoning is not particularly complex, and might well have been done before.
It’s kind of a poor man’s version of the central limit theorem, for differing distributions.
By this I mean that it’s known that if you take the mean of identical independent distributions, it will tend to a narrow spike as the number of distributions increases. This post shows that similar things happen with non-identical distributions, if we bound the variances.
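The concentration claim above is easy to check empirically. The sketch below (my own illustration, not code from the post) averages samples from two different distribution families, each with variance bounded by 1, and shows the spread of the mean shrinking as the number of distributions grows, as Chebyshev's inequality predicts (Var(mean) ≤ V/n when every variance is at most V):

```python
import random
import statistics

def sample_mean(n, seed=0):
    """Draw one sample from each of n different distributions and average.

    The distributions alternate between two families; each has mean 0
    and variance at most 1, so the bounded-variance condition holds.
    """
    rng = random.Random(seed)
    total = 0.0
    for i in range(n):
        if i % 2 == 0:
            total += rng.uniform(-3 ** 0.5, 3 ** 0.5)  # variance exactly 1
        else:
            total += rng.gauss(0, 0.5)                 # variance 0.25
    return total / n

# Empirical spread of the mean shrinks roughly like 1/sqrt(n).
spread_small = statistics.pstdev(sample_mean(10, seed=s) for s in range(2000))
spread_large = statistics.pstdev(sample_mean(1000, seed=s) for s in range(2000))
print(spread_small, spread_large)  # the second is much smaller
```

With 100× more distributions in the average, the empirical standard deviation drops by roughly a factor of 10, even though no two distributions are identical.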
And please do point out any errors that anyone finds!
The math looks valid—I believe the content is original to Stuart_Armstrong, attempting to show a novel set of preferences which imply expected-value calculation in (sufficiently) iterated cases but not in isolated cases.
Edit: For example, an agent whose decision-making criteria satisfy Stuart_Armstrong’s criteria might refuse to bet $1 for a 50% chance of winning $2.50 and 50% chance of losing his initial dollar if it were a one-off gamble, but would be willing to make 50 such bets in a row if the odds of winning each were independent. In both cases, the expected value is positive, but only in the latter case is the probable variation from the expected value small enough to overcome the risk aversion.
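The arithmetic behind this example can be made explicit. A small sketch (my own; I read "winning $2.50" as a net gain of $1.50 after the $1 stake, which is an assumption since the comment is ambiguous):

```python
# One bet: 50% chance of net gain +$1.50, 50% chance of losing the $1 stake.
p_win = 0.5
gain, loss = 1.50, -1.00

ev_one = p_win * gain + (1 - p_win) * loss  # expected value of a single bet
var_one = p_win * (gain - ev_one) ** 2 + (1 - p_win) * (loss - ev_one) ** 2
sd_one = var_one ** 0.5

# For 50 independent bets, expected values add and variances add.
n = 50
ev_many = n * ev_one
sd_many = (n * var_one) ** 0.5

print(ev_one, sd_one)    # 0.25 vs 1.25: the spread dwarfs the edge
print(ev_many, sd_many)  # 12.5 vs ~8.84: the edge now exceeds the spread
```

For one bet the standard deviation (1.25) is five times the expected value (0.25), so a risk-averse agent may decline; over 50 independent bets the expected value (12.5) exceeds the standard deviation (about 8.84), which is the sense in which iteration overcomes the risk aversion.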
This article had an interesting title so I scanned it—but it lacked an abstract, a conclusion, had lots of maths in it—and I haven’t liked most of Stuart’s other articles—so I gave up on it early.
The article attempts to show that you don’t need the independence axiom to justify using expected utility. So I replaced the independence axiom with another axiom that basically says that a very thin distribution is pretty much the same as a guaranteed return.
Then I showed that if you had a lot of “reasonable” lotteries and put them together, you should behave approximately according to expected utility.
There’s a lot of maths in it because the result is novel, and therefore has to be firmly justified. I hope to explore non-independent lotteries in future posts, so the foundations need to be solid.