We’re working on it; we’ve gotten the DVDs from the 92nd Street Y and should be processing and uploading them soon.
I think a large part of luck works in the opposite manner, although I don’t think the claim you’re making is the same claim that the Telegraph is making (indeed, I’m not sure if the Telegraph is saying anything specific enough to count as a claim).
Suppose you always do things that have a 95% chance of working, but don’t have a high payoff. You apply to safety schools for college, you only date people you already know really well, you get a safe, steady job that pays a respectable salary, without much danger of getting fired or outsourced. You’ll probably think, “Wow! 95% of the stuff I try works! I must be really lucky!”
Now, suppose that you always do things that have a 5% chance of working, but do have a high payoff. Say, you try to start five different companies, and all of them fail. You’ll probably think, “Wow, I must be unlucky, nothing I try works”, and it’s quite possible you’ll just give up, even if one success would make you worth $100M.
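A quick Monte Carlo sketch of the two strategies (the payoff numbers here are mine, purely for illustration):

```python
import random

# Invented numbers, just for illustration: "safe" bets succeed 95% of the
# time and pay $50k; "risky" bets succeed 5% of the time and pay $100M.
# Each simulated "lifetime" gets five attempts (e.g., five companies).
TRIALS = 100_000
ATTEMPTS = 5

def simulate(p_success, payoff):
    """Run TRIALS lifetimes of ATTEMPTS bets; return (hit rate, avg payoff)."""
    wins = sum(random.random() < p_success
               for _ in range(TRIALS * ATTEMPTS))
    return wins / (TRIALS * ATTEMPTS), wins * payoff / TRIALS

for name, p, payoff in [("safe", 0.95, 50_000), ("risky", 0.05, 100_000_000)]:
    rate, avg = simulate(p, payoff)
    print(f"{name}: {rate:.0%} of attempts succeed, "
          f"average lifetime payoff ${avg:,.0f}")
# The risky strategy feels unlucky (you usually go 0-for-5), but its
# expected payoff is ~100x higher.
```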
I think so. The traditional definition of an “illness”, I think, is something that would cause you pain even if you were stuck on a desert island. Eg., even if you were stranded in the middle of nowhere, you still wouldn’t want to get the flu. The point of the post is that the word “illness” is gradually being redefined more broadly, to “any physical/mental characteristic that society views as negative”.
Eg., if I were 4′10″ and stuck on a desert island, would it bother me to be 4′10″ instead of 5′10″? I doubt it, unless it came along with some sort of physical deformity; that’s only a difference of 17%. Yet, if I were 4′10″ now, it would probably have substantial negative effects, like earning less and being considered generally less desirable in dating.
Cancer, heart disease, stroke, and diabetes affect way more than 1.2% of the population, and no one has ever had any trouble defining them as illnesses.
Not being immortal (in the sense of dying from old age) is obviously an illness, but hasn’t been recognized as such by most outside the transhumanist community, because it’s universal. It would be in a sane society, but there you go.
Nutrient deficiency of various sorts has always been recognized as an illness (eg., scurvy for lack of vitamin C), and this has since been expanded to include general starvation (ICD-10 code T73.0).
Lack of video games is a fact about the video games, not a fact about your body.
You’re right that death isn’t a disease; it’s an effect of disease. But aging itself is clearly a disease. When you get old, it’s not like you’re perfectly fine until you’re age 80 and then get struck down by a random sickness. The body itself degrades over time and loses various functions, much as it does in Lou Gehrig’s disease.
“To consider an analogous situation, imagine having to choose between a project that gave one util to each person on the planet, and one that handed slightly over twelve billion utils to a randomly chosen human and took away one util from everyone else. If there were trillions of such projects, then it wouldn’t matter what option you chose. But if you only had one shot, it would be peculiar to argue that there are no rational grounds to prefer one over the other, simply because the trillion-iterated versions are identical.”
That’s not the way expected utility works. Utility is simply a way of assigning numbers to our preferences; states with bigger numbers are better than states with smaller numbers by definition. If outcome A has six billion plus a few utilons, and outcome B has six billion plus a few utilons, then, under whichever utility function we’re using, we are indifferent between A and B by definition. If we are not indifferent between A and B, then we must be using a different utility function.
To take one example, suppose we were faced with the choice between A, giving one dollar’s worth of goods to every person in the world, or B, taking one dollar’s worth of goods from every person in the world and handing thirteen billion dollars’ worth of goods to one randomly chosen person. The amount of goods in the world is the same in both cases. However, if I prefer A to B, then U(A) must be larger than U(B), as this is just a different way of saying the exact same thing.
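To make this concrete, here’s a toy Python calculation. The modeling choices are entirely mine: I assume everyone starts with $10,000 and that each person’s utility is the log of their wealth.

```python
import math

POPULATION = 6_000_000_000  # roughly the world population in the example
BASELINE = 10_000.0         # assumed starting wealth per person

def total_utility(transfer_to_all, transfer_to_winner):
    """Sum of per-person log-utilities after the transfers."""
    everyone_else = (POPULATION - 1) * math.log(BASELINE + transfer_to_all)
    winner = math.log(BASELINE + transfer_to_all + transfer_to_winner)
    return everyone_else + winner

u_a = total_utility(1.0, 0.0)                # A: $1 to everyone
u_b = total_utility(-1.0, 13_000_000_000.0)  # B: -$1 each, $13B to one person

print(f"U(A) - U(B) = {u_a - u_b:,.0f}")  # large and positive: A is preferred
```

The total amount of goods is identical either way, but under any concave (diminishing-returns) utility function U(A) comes out higher, and that is exactly what “preferring A to B” means.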
Now, if each person has a different utility function, and we must find a way to aggregate them, that is indeed an interesting problem. However, in that case, one must be careful to refer to the utility function of persons A, B, C, etc., rather than just saying “utility”, as this is an exceedingly easy way to get confused.
You are right that utility does not sum linearly, but there are much less confusing ways of demonstrating this. Eg., the law of diminishing marginal utility: one million dollars is not a million times as useful as one dollar, if you are an average middle-class American, because you start to run out of high-utility-to-cost-ratio things to buy.
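Under the same (assumed) log-utility model as above:

```python
import math

baseline = 10_000.0  # assumed starting wealth
gain_from_1 = math.log(baseline + 1) - math.log(baseline)
gain_from_1m = math.log(baseline + 1_000_000) - math.log(baseline)
print(f"$1M is ~{gain_from_1m / gain_from_1:,.0f}x as useful as $1")
# ~46,000x -- nowhere near 1,000,000x, because marginal utility diminishes.
```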
If you hold lottery A once, and it has utility B, that does not imply that if you hold lottery A X times, it must have a total utility of X times B. In most cases, if you want to perform X lotteries such that every lottery has the same utility, you will have to perform X different lotteries, because each lottery changes the initial conditions for the subsequent lottery. Eg., if I randomly give some person a million dollars’ worth of stuff, this probably has some utility Q. However, if I hold the lottery a second time, it no longer has utility Q; it now has utility Q minus epsilon, because there’s slightly more stuff in the world, so adding a fixed amount of stuff matters less. If I want another lottery with utility Q, I must give away slightly more stuff the second time, and even more stuff the third time, and so on and so forth.
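The same toy model shows the Q-minus-epsilon effect (again, the log-utility form and the dollar figures are my assumptions, not anything established above):

```python
import math

WORLD = 1e12   # assumed total goods in the world, in dollars
PRIZE = 1e6    # $1M worth of stuff given away per lottery

# Q: the utility of the very first lottery.
q = math.log(WORLD + PRIZE) - math.log(WORLD)

for k in (0, 100_000, 1_000_000):
    base = WORLD + k * PRIZE                     # stuff added by k lotteries
    u = math.log(base + PRIZE) - math.log(base)  # utility of the next one
    needed = base * math.expm1(q)                # prize restoring utility Q
    print(f"after {k:>9,} lotteries: utility {u / q:.2f}Q; "
          f"a prize of ${needed:,.0f} is needed to get Q again")
```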
If you define your utility function such that each lottery has identical utility, then sure. However, your utility function also includes preferences based on fairness. If you think that a one-billionth chance of doing lottery A a billion times is better than doing lottery A once on grounds of fairness, then your utility function must assign a different utility to lottery #658,168,192 than lottery #1. You cannot simultaneously say that the two are equivalent in terms of utility and that one is preferable to the other on grounds of X; that is like trying to make A = 3 and A = 4 at the same time.
Yes, fixed.
For the first: You can only write numbers up to 10.
For the second: Yes, that’s the point, and that’s not what determinism means; determinism just means that there is no randomness involved. Relative distances between preferences matter! Suppose the vote were between option A (you get a thousand dollars), option B (you get nothing), and option C (your country gets destroyed in nuclear war). You would want a way to express that you dislike C much, much more than you like A.
“The procedure you’re proposing collapses into approval voting immediately.”
This only holds when you already know the outcome of every other vote, and not otherwise. (Of course, in the real world, you don’t normally know the outcome of every other vote). Suppose, for instance, that you have three possible outcomes, A, B and C, where A is “you win a thousand dollars”, B is “you get nothing”, and C is “you lose a thousand dollars”. Suppose that (as a simple case) you know that there’s a 50% chance of the other scores being (92, 93, 90) and a 50% chance of the other scores being (88, 92, 99).
If you vote 10, 0, 0, then 50% of the time you win a thousand dollars, and 50% of the time you lose a thousand dollars, for a net expected value of $0. If you vote 10, 10, 0, you always get nothing, for a net expected value of $0. If you vote 10, 8, 0, however, you win $1000 50% of the time and get nothing 50% of the time, for a total expected value of $500.
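Here is that arithmetic spelled out, as a small Python check of the numbers above:

```python
PAYOFF = {"A": 1_000, "B": 0, "C": -1_000}
SCENARIOS = [(92, 93, 90), (88, 92, 99)]  # other voters' totals, 50% each

def expected_value(my_vote):
    ev = 0.0
    for others in SCENARIOS:
        totals = dict(zip("ABC", (o + m for o, m in zip(others, my_vote))))
        winner = max(totals, key=totals.get)  # highest combined score wins
        ev += 0.5 * PAYOFF[winner]
    return ev

for vote in [(10, 0, 0), (10, 10, 0), (10, 8, 0)]:
    print(vote, "->", f"${expected_value(vote):,.0f}")
# (10, 0, 0) -> $0;  (10, 10, 0) -> $0;  (10, 8, 0) -> $500
```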
If you want to use the word “determinism” in that sense, then a far better definition would be “the voting outcome is not affected by anything other than the votes of the voters”, which my system does satisfy. As I said above, I haven’t claimed to have found a flaw in the mathematical proof of Arrow’s Theorem, just a mismatch between the content of the theorem and how voting systems work in real life. Certainly, in real life, we should want to distinguish between “a vote between the Democrats, the Greens, and the Republicans” and “a vote between the Democrats, the Greens, and a superintelligent UFAI”, even if our preference order is the same in both cases.
That’s true, in the limit as the number of voters goes to infinity, if you only care about which preference is ranked the highest. However, there are numerous circumstances where this isn’t the case.
The Gibbard-Satterthwaite theorem is like Arrow’s theorem, in that it only applies to voting systems which work solely based on preference ordering. Under my system, there’s no incentive for “tactical voting” in the sense of giving a higher score to a candidate you think is worse; a candidate can only do better if you give them a higher score.
You can just choose the first one in alphabetical order, or some equivalent.
Okay, then, we all agree before the vote to break ties based on some arbitrary binary digit of pi, which we don’t look at until after the vote is done.
That’s not what independence of irrelevant alternatives means; it means that, given the same relative rankings, the voting algorithm will always make the same decision, with or without the alternative. It doesn’t mean that the voters will make the same decisions.
I was thinking that Yale students might want to attend and Yorkside is right across the street from campus.