# Dacyn (David Simmons)

Karma: 483
• OK, that’s fair, I should have written down the precise formula rather than an approximation. My point though is that your statement

> the expected value of X happening can be high when it happens a little (because you probably get the good effects and not the bad effects Y)

is wrong because a low probability of large bad effects can swamp a high probability of small good effects in expected value calculations.
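A quick numeric sketch of that point (the probabilities and utilities here are invented for illustration, not taken from the thread):

```python
# X usually yields a small benefit, but rarely a large harm.
p_good, u_good = 0.99, 1       # 99% chance of a small good effect
p_bad, u_bad = 0.01, -1000     # 1% chance of a large bad effect

expected_value = p_good * u_good + p_bad * u_bad
print(expected_value)  # about -9.01: the rare disaster dominates
```

Even though the good outcome is nearly certain, the expected value is strongly negative.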

• 5 Nov 2023 15:29 UTC
1 point
−2

Yeah, but the expected value would still be .

• I don’t see why you say Sequential Proportional Approval Voting gives little incentive for strategic voting. If I am confident a candidate I support is going to be elected in the first round, it’s in my interest not to vote for them so that my votes for other candidates I support will count for more. Of course, if a lot of people think like this then a popular candidate could actually lose, so there is a bit of a brinksmanship dynamic going on here. I don’t think that is a good thing.
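The incentive can be made concrete with a toy SPAV election (the electorate below is invented to illustrate the point; ballots reweight by 1/(1 + number of approved candidates already elected)):

```python
# Minimal Sequential Proportional Approval Voting (SPAV) sketch.
def spav(ballots, seats):
    elected = []
    for _ in range(seats):
        scores = {}
        for approved, count in ballots:
            # Each ballot group's weight shrinks with every approved
            # candidate it has already helped elect.
            w = count / (1 + sum(1 for c in approved if c in elected))
            for c in approved:
                if c not in elected:
                    scores[c] = scores.get(c, 0) + w
        elected.append(max(scores, key=scores.get))
    return elected

# Honest ballots: (approved set, number of voters)
honest = [({"A", "B"}, 10), ({"A"}, 5), ({"C"}, 6)]
# 4 of the {A, B} voters, confident A will win anyway, drop A:
strategic = [({"A", "B"}, 6), ({"B"}, 4), ({"A"}, 5), ({"C"}, 6)]

print(spav(honest, 2))     # ['A', 'C']: B's backers split their weight
print(spav(strategic, 2))  # ['A', 'B']: the withheld votes keep full weight
```

By withholding approval from the sure winner A, the strategic voters keep full weight in the second round and get B elected over C; if too many voters do this, A itself can lose the first seat.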

• The definition of a derivative seems wrong. For example, suppose that f(x) = 0 for x rational but f(x) = 1 for x irrational. Then f is not differentiable anywhere, but according to your definition it would have a derivative of 0 everywhere (since Δx could be an infinitesimal consisting of a sequence of only rational numbers).

• But if they are linearly independent, then they evolve independently, which means that any one of them, alone, could have been the whole thing—so why would we need to postulate the other worlds? And anyway, aren’t the worlds supposed to be interacting?

Can’t this be answered by an appeal to the fact that the initial state of the universe is supposed to be low-entropy? The wavefunction corresponding to one of the worlds, run back in time to the start of the universe, would have higher entropy than the wavefunction corresponding to all of them together, so it’s not as good a candidate for the starting wavefunction of the universe.

• No, the whole premise of the face-reading scenario is that the agent can tell that his face is being read, and that’s why he pays the money. If the agent can’t tell whether his face is being read, then his correct action (under FDT) is to pay the money if and only if (probability of being read) times (utility of returning to civilization) is greater than (utility of the money). Now, if this condition holds but in fact the driver can’t read faces, then FDT does pay the $50, but this is just because it got unlucky, and we shouldn’t hold that against it.

• In your new dilemma, FDT does not say to pay the $50. It only says to pay when the driver’s decision of whether or not to take you to the city depends on what you are planning to do when you get to the city. Which isn’t true in your setup, since you assume the driver can’t read faces.
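The pay-if-and-only-if condition is just a threshold comparison; a sketch with invented numbers:

```python
# Pay the driver iff p_read * u_city > u_money.
p_read = 0.6      # assumed probability the driver reads your face
u_city = 1000     # utility of returning to civilization
u_money = 50      # utility of keeping the $50

should_pay = p_read * u_city > u_money
print(should_pay)  # True: 600 > 50
```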

> a random letter contains about 7.8 (bits of information)

This is wrong; a random letter contains log(26)/log(2) ≈ 4.7 bits of information.
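The calculation, for a uniformly random letter from a 26-letter alphabet:

```python
import math

# Entropy of a uniform choice among 26 letters, in bits.
bits = math.log(26) / math.log(2)   # equivalently math.log2(26)
print(round(bits, 1))  # 4.7
```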

• I have tinnitus every time I think about the question of whether I have tinnitus. So do I have tinnitus all the time, or only the times when I notice?

• I was confused at first what you meant by “1 is true” because when you copied the post from your blog you didn’t copy the numbering of the claims. You should probably fix that.

> The number 99 isn’t unique—this works with any payoff between 30 and 100.

Actually, it only works with payoffs below 99.3: this is the payoff you get by setting the dial to 30 every round while everyone else sets their dials to 100, so any Nash equilibrium must beat that. This was mentioned in jessicata’s original post.

Incidentally, this feature prevents the example from being a subgame perfect Nash equilibrium—once someone defects by setting the dial to 30, there’s no incentive to “punish” them for it, and any attempt to create such an incentive via a “punish non-punishers” rule would run into the trouble that punishment is only effective up to the 99.3 limit.
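The 99.3 figure can be reproduced arithmetically if one assumes 100 players and a payoff determined by the average dial setting (my reading of the setup, not stated above):

```python
# One player defects to 30 while the other 99 keep their dials at 100.
dials = [30] + [100] * 99
average = sum(dials) / len(dials)
print(average)  # 99.3
```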

• It’s part of the “frontpage comment guidelines” that show up every time you make a comment. They don’t appear on GreaterWrong though, which is why I guess you can’t see them...

• I explained the problem with the votes-per-dollar formula in my first post. 45% of the vote / $1 >> 55% of the vote / $2, so it is not worth it for a candidate to spend money even if they can buy 10% of the vote for $1 (which is absurdly unrealistically high). When I said maybe a formula would help, I meant a formula to explain what you mean by “coefficient” or “effective exchange rate”. The formula “votes / dollars spent” doesn’t have a coefficient in it.

> If one candidate gets 200 votes and spends 200 dollars, and candidate 2 gets 201 votes and spends two MILLION dollars, who has the strongest mandate, in the sense that the representative actually represents the will of the people when wealth differences are ignored?

Sure, and my proposal of Votes / (10X + Y) would imply that the first candidate wins.

• I don’t think the data dependency is a serious problem, all we need is a very loose estimate. I don’t know what you mean by a “spending barrier” or by “effective exchange rate”, and I still don’t know what coefficient you are talking about. Maybe it would help if you wrote down some formulas to explain what you mean.

• I don’t understand what you mean; multiplying the numerator by a coefficient wouldn’t change the analysis. I think if you wanted to have a formula that was somewhat sensitive to campaign spending but didn’t rule out campaign spending completely as a strategy, Votes / (10X + Y) might work, where Y is the amount of campaign spending, and X is an estimate of average campaign spending. (The factor of 10 is because campaign spending just isn’t that large a factor in how many votes you get in absolute terms; it’s easy to get maybe 45% of the vote with no campaign spending at all, just by having (D) or (R) in front of your name.)

• The result of this will be that no one will spend more than the $1 minimum. It’s just not worth it. So your proposal is basically equivalent to outlawing campaign spending.
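The two formulas can be checked directly. The units and the estimate X of average campaign spending below are invented for illustration; only the orderings matter:

```python
# 1. Plain votes-per-dollar: buying 10 points of vote share by
#    doubling spending still halves your score, so never spend.
assert 45 / 1 > 55 / 2          # 45.0 vs 27.5

# 2. Votes / (10X + Y): the damped denominator lets spending pay off
#    in the close race...
X = 1  # assumed average spending, same (tiny) units as above
assert 45 / (10 * X + 1) < 55 / (10 * X + 2)   # ~4.09 vs ~4.58

#    ...while the 200-vs-201 example still goes to the frugal candidate.
X = 1_000_000  # assumed average spending, in dollars
frugal = 200 / (10 * X + 200)
big_spender = 201 / (10 * X + 2_000_000)
assert frugal > big_spender

print("all checks pass")
```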

• 27 Mar 2023 19:43 UTC
6 points
4