Sure, I agree with that. But you see, that’s not what the quote said. It’s actually not even related to what the quote said, except in a very tenuous manner. The quote condemned people complaining about drinks on an airplane; that was the whole point of mentioning the technology at all. I take issue with the quote as stated, not with every somewhat similar-sounding idea.
DSherron
That honestly seems like some kind of fallacy, although I can’t name it. I mean, sure, take joy in the merely real, that’s a good outlook to have; but it’s highly analogous to saying something like “Average quality of life has gone up dramatically over the past few centuries, especially for people in major first world countries. You get 50-90 years of extremely good life—eat generally what you want, think and say anything you want, public education; life is incredibly great. But talk to some people, I absolutely promise you that you will find someone who, in the face of all that incredible achievement, will be willing to complain about [starving kid in Africa|environmental pollution|dying peacefully of old age|generally any way in which the world is suboptimal].”
That kind of outlook not only doesn’t support any kind of progress, or even just utility maximization; it actively paints the very idea of making things even better as presumptuous and evil. It is not enough for something to be merely awe-inspiring; I want more. I want to not just watch a space shuttle launch (which is pretty cool on its own), but also have a drink that tastes better than any other in the world, with all of my best friends around me, while engaged in a thrilling intellectual conversation about strategy or tactics in the best game ever created. While a wizard turns us all into whales for a day. On a spaceship. A really cool spaceship. I don’t just want good; I want the best. And I resent the implication that I’m just ungrateful for what I have. Hell, what would all those people who invested the blood, sweat, and tears to make modern flight possible say if they heard someone suggesting that we should just stick to the status quo because “it’s already pretty good, why try to make it better?” I can guarantee they wouldn’t agree.
If, after realizing an old mistake, you find a way to say “but I was at least sort of right, under my new set of beliefs,” then you are selecting your beliefs badly. Don’t identify as a person who was right, or as one who is right; identify as a person who will be right. Discovering a mistake has to be a victory, not a setback. Until someone gets to this point, there is no point in trying to engage them in normal rational debate; instead, engage them on their own grounds until they reach that basic level of rationality.
For people having an otherwise rational debate, at this point they need to drop the Green and Blue labels (any rationalist should be happy to do so, since they’re just a shorthand for the full belief system) and start specifying their actual beliefs. The fact that one identifies as a Green or a Blue is a red flag of glaring irrationality, confirmed if they refuse to drop the label to talk about individual beliefs, in which case do the above. Sticking with the labels is a way to make your beliefs feel stronger, via something like a halo effect where every good thing about Green or Greens gets attributed to every one of your beliefs.
Answered “moderate programmer, incorrect”. I got the correct final answer but had 2 boxes incorrect. I haven’t checked where I went wrong, although I was very surprised I had, since back in grade school I got these things right with near-perfect accuracy. I learned programming very easily and have traditionally rapidly outpaced my peers, but I’m only just starting professionally and don’t feel like an “experienced” programmer. As for the test, I suspect it will show some distinction but with very many false positives and negatives. There are too many uncovered aspects of what seems to make up a natural programmer. Also, it is tedious as hell, and I suspect that boredom will lead to recklessness, which will lead to false negatives—not terrible, but still not good. It may also lead to some selection effect.
God f*ing damn it. Again? He has 99.9% accuracy, problem resolved. Every decision remains identical unless a change of 1/1000 in your calculations causes a different action, which in Newcomboid problems it never should.
Note to anyone and everyone who encounters any sort of hypothetical with a “perfect” predictor: if you write it, always state an error rate, and if you read it then assume one (but not one higher than whatever error rate would make a TDT agent choose to two-box).
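To see why a 1/1000 error rate changes nothing, here is a quick sanity check assuming the standard Newcomb payoffs ($1,000,000 in the opaque box iff the predictor expects one-boxing, $1,000 in the transparent box); the payoff numbers are the conventional ones, not anything stated above:

```python
# Expected dollar value of each choice, given predictor accuracy p.
def ev_one_box(p):
    # Get $1,000,000 only when the predictor correctly foresaw one-boxing.
    return p * 1_000_000

def ev_two_box(p):
    # Get $1,000 always, plus $1,000,000 when the predictor wrongly
    # expected one-boxing.
    return (1 - p) * 1_000_000 + 1_000

# At 99.9% accuracy the 1/1000 error rate does not flip the decision:
assert ev_one_box(0.999) > ev_two_box(0.999)
```

The break-even accuracy works out to p = 0.5005, so any remotely competent predictor leaves the one-boxing decision unchanged.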
Right, I didn’t quite work all the math out precisely, but at least the conclusion was correct. This model is, as you say, exclusively for fatal logic errors; the sorts where the law of non-contradiction doesn’t hold, or something equally unthinkable, such that everything you thought you knew is invalidated. It does not apply in the case of normal math errors for less obvious conclusions (well, it does, but your expected utility given no errors of this class still has to account for errors of other classes, where you can still make other predictions).
That’s not how decision theory works. The bounds on my probabilities don’t actually apply quite like that. When I’m making a decision, I can usefully talk about the expected utility of taking the bet under the assumption that I have not made an error, multiply that by the probability that I have not made an error, and add the result to the probability-weighted expected utility of taking the bet given that I have made an error. This gives me the correct expected utility for taking the bet, and will not result in me taking stupid bets just because of the chance I’ve made a logic error; after all, given that my entire reasoning is wrong, I shouldn’t expect taking the bet to be any better or worse than not taking it. In shorter terms: EU(action) = EU(action & ¬error) + EU(action & error); also EU(action & error) = EU(anyOtherAction & error), meaning that when I compare any two actions I get EU(action) − EU(otherAction) = EU(action & ¬error) − EU(otherAction & ¬error). Even though my probability estimates are affected by the presence of an error factor, my decisions are not. On the surface this seems like an argument that the distinction is somehow trivial or pointless; however, the critical difference is that while I cannot predict the nature of such an error ahead of time, I can potentially recover from it iff I assign >0 probability to it occurring. Otherwise I will never assign it anything other than 0, no matter how much evidence I see. In the incredibly improbable event that I am wrong, given extraordinary amounts of evidence I can be convinced of that fact. And that will cause all of my other probabilities to update, which will cause my decisions to change.
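A toy numerical sketch of that cancellation (the probability and payoff numbers here are made up purely for illustration):

```python
# The "fatal logic error" term contributes the same amount to every
# action's expected utility, so it cancels out of any comparison
# between actions. All numbers are arbitrary.
p_error = 1e-6  # probability my entire reasoning is broken

def eu(value_if_sane, value_if_error=0.0):
    # EU(action) = P(no error) * EU(action | no error)
    #            + P(error)    * EU(action | error)
    # Given a fatal error, no action is predictably better than any
    # other, so value_if_error is identical across actions.
    return (1 - p_error) * value_if_sane + p_error * value_if_error

take_bet = eu(10.0)
decline = eu(3.0)

# The difference between actions depends only on the no-error values:
diff = take_bet - decline
assert abs(diff - (1 - p_error) * (10.0 - 3.0)) < 1e-9
```

The ranking of actions is unchanged by the error term; only the absolute probability estimates shift, which is exactly the point made above.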
Neat. Consider my objection retracted. Although I suspect someone with more knowledge of the material could give a better explanation.
This comment fails to address the post in any way whatsoever. No claim is made of the “right” thing to do; a hypothetical is offered, and the question asked is “what do you do?” It is not even the case that the hypothetical rests on an idea of an intrinsic “right thing” to do, instead asking us to measure how much we value knowing the truth vs happiness/lifespan, and how much we value the same for others. It’s not an especially interesting or original question, but it does not make any claims which are relevant to your comment.
EDIT: That does make more sense, although I’d never seen that particular example used as “fighting the hypothetical”, more just that “the right thing” is insufficiently defined for that sort of thing. Downvote revoked, but it’s still not exactly on point to me. I also don’t agree that you need to fight the hypothetical this time, other than to get rid of the particular example.
While I don’t think this article was brilliant, it seems to be getting downvoted in excess of what seems appropriate. I’m not entirely sure why that is, although a bad choice of example probably helped push it along.
To answer the main question: need more information. I mean, it depends on the degree to which the negative effects happen, and the degree to which it seems this new belief will be likely to have major positive impacts on decision-making in various situations. I would, assuming I’m competent and motivated enough, create a secret society which generally kept the secret but spread it to all of the world’s best and brightest, particularly in fields where knowing the secret would be vital to real success. I would also potentially offer a public face of the organization, where the secret is openly offered to any willing to take on the observed penalties in exchange for the observed gains. It could only be given out to those trusted not to tell, of course, but it should still be publicly offered; science needs to know, even if not every scientist needs to know.
Yes, 0 is no more a probability than 1 is. You are correct that I do not assign 100% certainty to the idea that 100% certainty is impossible. The proposition is of precisely that form though, that it is impossible—I would expect to find that it was simply not true at all, rather than expect to see it almost always hold true but sometimes break down. In any case, yes, I would be willing to make many such bets. I would happily accept a bet of one penny, right now, against a source of effectively limitless resources, for one example.
As to what probability you assign: I do not find it in the slightest improbable that you claim 100% certainty in full honesty. I do question, though, whether you would make literally any bet offered to you. Would you take the other side of my bet? Having limitless resources, or a FAI, or something, would you be willing to bet losing it in exchange for a value roughly equal to that of a penny right now? In fact, you ought to be willing to risk losing it for no gain; you’d be indifferent on the bet, and you get free signaling from it.
Sure, it sounds pretty reasonable. I mean, it’s an elementary facet of logic, and there’s no way it’s wrong. But, are you really, 100% certain that there is no possible configuration of your brain which would result in you holding that A implies not A, while feeling the exact same subjective feeling of certainty (along with being able to offer logical proofs, such that you feel like it is a trivial truth of logic)? Remember that our brains are not perfect logical computers; they can make mistakes. Trivially, there is some probability of your brain entering into any given state for no good reason at all due to quantum effects. Ridiculously unlikely, but not literally 0. Unless you believe with absolute certainty that it is impossible to have the subjective experience of believing that A implies not A in the same way you currently believe that A implies A, then you can’t say that you are literally 100% certain. You will feel 100% certain, but this is a very different thing than actually literally possessing 100% certainty. Are you certain, 100%, that you’re not brain damaged and wildly misinterpreting the entire field of logic? When you posit certainty, there can be literally no way that you could ever be wrong. Literally none. That’s an insanely hard thing to prove, and subjective experience cannot possibly get you there. You can’t be certain about what experiences are possible, and that puts some amount of uncertainty into literally everything else.
“Exist” is meaningful in the sense that “true” is meaningful, as described in EY’s The Simple Truth. I’m not really sure why anyone cares about saying something with probability 1 though; no matter how carefully you think about it, there’s always the chance that in a few seconds you’ll wake up and realize that even though it seems to make sense now, you were actually spouting gibberish. Your brain is capable of making mistakes while asserting that it cannot possibly be making a mistake, and there is no domain on which this does not hold.
I think that claiming that is just making the confusion worse. Sure, you could claim that our preferences about “moral” situations are different from our other preferences; but the very feeling that makes them seem different at all stems from the core confusion! Think very carefully about why you want to distinguish between these types of preferences. What do you gain, knowing something is a “moral” preference (excluding whatever membership defines the category)? Is there actually a cluster in thingspace around moral preferences, which is distinctly separate from the “preferences” cluster? Do moral preferences really have different implications than preferences about shoes and ice cream? The only thing I can imagine is that when you phrase an argument to humans in terms of morality, you get different responses than to preferences (“I want Greta’s house” vs “Greta is morally obligated to give me her house”). But I can imagine no other way in which the difference could manifest. I mean, a preference is a preference is a term in a utility function. Mathematically they’d better all work the same way or we’re gonna be in a heap of trouble.
Things encoded in human brains are part of the territory; but this does not mean that anything we imagine is in the territory in any other sense. “Should” is not an operator that has any useful reference in the territory, even within human minds. It is confused, in the moral sense of “should” at least. Telling anyone “you shouldn’t do that” when what you really mean is “I want you to stop doing that” isn’t productive. If they want to do it then they don’t care what they “should” or “shouldn’t” do unless you can explain to them why they in fact do or don’t want to do that thing. In the sense that “should do x” means “on reflection would prefer to do x” it is useful. The farther you move from that, the less useful it becomes.
I’m not a physicist, and I couldn’t give a technical explanation of why that won’t work (although I feel like I can grasp an intuitive idea based on how the Uncertainty Principle works to begin with). However, remember the Litany of a Bright Dilettante. You’re not going to spot a trivial means of bypassing a fundamental theory in a field like physics after thinking for five minutes on a blog.
Incidentally, the Uncertainty Principle doesn’t talk about the precision of our possible measurements, per se, but about the actual amplitude distribution for the observable. As you get arbitrarily precise along one of the pair you get arbitrarily spread out along the other, so that the second value is indeterminate even in principle.
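For reference, the standard textbook form of the relation (this is the general Heisenberg inequality for position and momentum, not anything specific to the comment above) bounds the product of the two standard deviations:

```latex
\sigma_x \, \sigma_p \ \ge \ \frac{\hbar}{2}
```

So driving the spread $\sigma_x$ toward zero forces $\sigma_p$ to grow without bound, regardless of measurement technique.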
The accusation of being bad concepts was not because they are vague, but because they lead to bad modes of thought (and because they are wrong concepts, in the manner of a wrong question). Being vague doesn’t protect you from being wrong; you can talk all day about “is it ethical to steal this cookie” but you are wasting your time. Either you’re actually referring to specific concepts that have names (will other people perceive this as ethically justified?) or you’re babbling nonsense. Just use basic consequentialist reasoning and skip the whole ethics part. You gain literally nothing from discussing “is this moral”, unless what you’re really asking is “what are the social consequences” or “will person x think this is immoral” or whatever. It’s a dangerous habit epistemically and serves no instrumental purpose.
This is because people are bad at making decisions, and have not gotten rid of the harmful concept of “should”. The original comment on this topic was claiming that “should” is a bad concept; instead of thinking “I should x” or “I shouldn’t do x”, on top of considering “I want to/don’t want to x”, just look at want/do not want. “I should x” doesn’t help you resolve “do I want to x”, and the second question is the only one that counts.
I think that your idea about morality is simply expressing a part of a framework of many moral systems. That is not a complete view of what morality means to people; it’s simply a part of many instantiations of morality. I agree that such thinking is the cause of many moral conflicts of the nature “I should x but I want to y”, stemming from the idea (perhaps subconscious) that they would tell someone else to x, instead of y, and people prefer not to defect in those situations. Selfishness is seen as a vice, perhaps for evolutionary reasons (see all the data on viable cooperation in the prisoner’s dilemma, etc.) and so people feel the pressure to not cheat the system, even though they want to. This is not behavior that a rational agent should generally want! If you are able to get rid of your concept of “should”, you will be free from that type of trap unless it is in your best interests to remain there.
Our moral intuitions do not exist for good reasons. “Fairness” and its ilk are all primarily political tools; moral outrage is a particularly potent tool when directed at your opponent. Just because we have an intuition does not make that intuition meaningful. Go for a week while forcing yourself to taboo “morality”, “should”, and everything like that. When you make a decision, make a concerted effort to ignore the part of your brain saying “you should x because it’s right”, and only listen to your preferences (note: you can have preferences that favor other people!). You should find that your decisions become easier and that you prefer those decisions to any you might have otherwise made. It also helps you to understand that you’re allowed to like yourself more than you like other people.
You’re sneaking in connotations. “Morality” has a much stronger connotation than “things that other people think are bad for me to do.” You can’t simply define the word to mean something convenient, because the connotations won’t go away. Morality is definitely not understood generally to be a social construct. Is that social construct the actual thing many people are in reality imagining when they talk about morality? Quite possibly. But those same people would tend to disagree with you if you made that claim to them; they would say that morality is just doing the right thing, and if society said something different then morality wouldn’t change.
Also, the land ownership analogy has no merit. Ownership exists as an explicit social construct, and I can point you to all sorts of evidence in the territory that shows who owns what. Social constructs about morality exist, but morality is not understood to be defined by those constructs. If I say “x is immoral” then I haven’t actually told you anything about x. In normal usage I’ve told you that I think people in general shouldn’t do x, but you don’t know why I think that unless you know my value system; you shouldn’t draw any conclusions about whether you think people should or shouldn’t x, other than due to the threat of my retaliation.
“Morality” in general is ill-defined, and often intuitions about it are incoherent. We make much, much better decisions by throwing away the entire concept. Saying “x is morally wrong” or “x is morally right” doesn’t have any additional effect on our actions, once we’ve run the best preference algorithms we have over them. Every single bit of information contained in “morally right/wrong” is also contained in our other decision algorithms, often in a more accurate form. It’s not even a useful shorthand; getting a concrete right/wrong value, or even a value along the scale, is not a well-defined operation, and thus the output does not have a consistent effect on our actions.
The ideal attitude for humans with our peculiar mental architecture probably is one of “everything is amazing, also let’s make it better”, just because of how happiness ties into productivity. But that would be the correct attitude regardless of the actual state of the world. There is no such thing as an “awesome” world state, just a “more awesome” relation between two such states. Our current state is beyond the wildest dreams of some humans, and hell incarnate in comparison to what humanity could achieve. It is a type error to say “this state is awesome”; you have to say “more awesome” or “less awesome” compared to something else.
Also, such behavior is not compatible with the quote. The quote advocates ignoring real suboptimal sections of the world and instead basking in how much better the world is than it used to be. How are you supposed to make the drinks better if you’re not even allowed to admit they’re not perfect? I could, with minor caveats, get behind “things are great, let’s make them better”, but that’s not what the quote said. The quote advocates pretending that we’ve already achieved perfection.