No doubt some of the marginal money would be wasted, but that’s always true and is true now. Science is and would be worth it even if the haircut was immense, and I don’t see a reason that the additional spending would be that much more wasted.
Also, the begging scenario you describe isn’t particularly scary. If giving more money to scientists meant there were more scientists, each with the same funding levels we have now, that seems like a perfectly fine outcome. If it meant there were more fundraisers seeking money for science and each raised the same quantity of funds, that also seems like a fine outcome.
I was rather embarrassed that it took me so long to realize what was going on, at which point I looked at the name again and smiled, but I think this is more than just framing. The most salient thing about magic, and to a lesser extent about things labeled ritual and ceremony, is that they are based on false beliefs and flat out do not work, or if they do work it is via placebo effects or they are otherwise not done for any physical effect. The parts where what was being described clearly worked as intended didn’t seem, upon reversing the frame, to change much at all for me.
Because of that, I feel like this passage doesn’t say the same things in a different frame. Instead, it makes additional explicit and implicit claims that completely change the… I was going to say way things should be looked at, but it seems right to say framed. Whether we have good evidence that these ‘rituals’ actually work matters and it matters a lot. It makes me wonder whether the frame primarily follows from those beliefs or if those beliefs primarily follow from the frame, or what this passage would look like if the anthropologist still viewed them in the primitive frame but avoided assuming or implying that what they were doing didn’t work.
Karma scores are doubtless a powerful force, and I have no doubt that they will do exactly what Robin suggests that they will. I don’t think you can have one without the other, unless you find a way to reward karma for external activity. However I strongly agree with khafra that it is very much a happy problem at this stage to have some people a little too concerned with their karma scores. The bias towards doing what can be scored seems likely to work for us in at least the medium term.
This assumes that the debate and possible solution set lie along a straight line, in which case reversed stupidity is very close to intelligence. In situations where this is strictly the case, this method might not be bad; in markets, if you can manage to buy when the idiots sell and sell when the idiots buy (again, along a straight line of possible values), in my experience you end up doing well, if you can figure out which end of the rope is which. JGWeissman, I wouldn’t worry about overcancellation too much, because the number of idiots is large and the number of people willing to employ heuristics like this is small.
In most situations of this type the best solutions lie far from the rope and even the smart people have long since given up doing anything but pulling. If that is not possible, and there is no cost to pull on the rope, trying to cancel out the idiots is on average likely to be better than doing nothing, but I certainly wouldn’t think this is a good primary methodology to make decisions.
If everything comes out exactly right, this can make a case for playing the lottery being better than doing nothing risky, but it can’t possibly make the case that the lottery isn’t massively worse than other forms of gambling. Even if the numbers games are gone, going to a casino offers the same opportunity at far better odds and allows you to choose the point on the curve where gambling stops being efficient. I do think, however, the point that negative-expectation risks can be rational is well taken.
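For a rough sense of how much worse the lottery is, here is a minimal sketch comparing expected loss per dollar wagered. The house-edge figures are approximate assumptions I am supplying for illustration, not numbers from the original comment.

```python
# Rough illustration: expected loss per dollar wagered under some commonly
# cited, approximate house edges. The exact figures are assumptions.
house_edges = {
    "state lottery (approx.)": 0.50,      # lotteries often pay back only about half the handle
    "roulette, double zero": 0.0526,
    "blackjack, basic strategy": 0.005,
}

stake = 100.0  # dollars wagered
for game, edge in house_edges.items():
    print(f"{game}: expected loss on ${stake:.0f} is about ${stake * edge:.2f}")
```

Under these assumed numbers, a dollar of lottery risk costs roughly ten times what the same dollar of risk costs at a casino table.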
I live in NYC proper.
You don’t always have to join Zug or Urk. Often you can let them fight it out and remain neutral, or choose how much of your resources to commit to the fight. Urk needs everything you have, whereas Zug would be perfectly happy to see you do nothing and in most conflicts most people stay out of it. Because of this Zug can’t afford to go around rewarding everyone for not joining Urk the same way Urk can reward you for joining him.
We choose our standards because we want to win and they help us do that. Judging others by those standards doesn’t work the same way unless by doing so you can get them to hold themselves to those same standards. Otherwise your standards are serving an altogether different purpose than the ones you impose on yourself, so it makes sense that they are radically different.
I would go an extra step beyond Iogi. You can learn more from smart people, but you can also teach them far more easily and enjoyably. They also often get more out of it. Finally being with someone who can understand what you’re talking about can be a great relief, even if, and sometimes especially if, you’re not learning anything at the moment. This also has the side benefit of impressing them and/or helping you gain high status, so the two can get intermingled.
There’s also the fact that they usually want to talk a lot, especially if you give them food for thought, so you have to go a long way before you risk monopolizing the conversation.
I find the example here highly amusing, because while you have taken this quirk to an extreme, I do a more moderate version of this balancing using the same feature of the same program. I don’t have to have playlists line up exactly, but I do keep things in balance. I find that I have a natural bias toward listening to the same tunes too much, with the result that I don’t do enough exploration and overplay the songs I like the most. A rule like this counteracts that bias, so this could also be an example of not letting the perfect become the enemy of the good.
After reading all the comments and getting a lot more details about Eliezer’s situation and the general responses to SLA, I have a theory:
SLA works by reducing appetite. The majority of the time, if you reduce appetite that causes people to eat less. When they eat less, they usually lose weight.
The problem is that SLA won’t work if that link is broken. If you already weren’t eating when you were hungry, then changing your hunger levels might not change how much you eat: you’d eat the same amount, or, if adding the oil doesn’t reduce that amount, slightly more. In Eliezer’s case, he already has so much willpower that he can break the link if he wants to, so SLA didn’t solve the right problem. There are other ways to unlink the two that don’t involve willpower as well. For these people, SLA doesn’t work. For another group, the link is severed past X pounds lost, so it stops working there.
I have a lot of experience with gambling and I do this regularly. I can verify that in my experience it makes you better calibrated. What I’ve had success with is to generate a probability range before I incorporate the market opinion, then use it to generate another. I find the key in practice is not to define a mathematical distribution but to give a mean prediction and a range that you find plausible, which should have a probability of around 95%. Often the mean is not centered.
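As a minimal sketch of the kind of procedure this describes: form your own estimate and plausible range first, then update it toward the market price. The blending weight, the function names, and the clamping step are my own illustrative assumptions, not the commenter’s actual method.

```python
# Minimal sketch: combine a pre-market estimate with the market-implied
# probability. The weight and the 95% range are illustrative assumptions.

def blended_estimate(own_mean, own_range_95, market_prob, market_weight=0.5):
    """Blend a pre-market probability estimate with the market's implied probability.

    own_mean       -- your probability before looking at the market
    own_range_95   -- (low, high) interval you'd give roughly 95% confidence
    market_prob    -- probability implied by current market odds
    market_weight  -- how much to defer to the market (assumed value)
    """
    low, high = own_range_95
    blended = (1 - market_weight) * own_mean + market_weight * market_prob
    # Keep the blended estimate inside your own plausible range as a sanity check.
    return min(max(blended, low), high)

# Example: I think 60% (plausibly 45-80%) before looking; the market says 70%.
print(blended_estimate(0.60, (0.45, 0.80), 0.70))  # -> 0.65
```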
That comment was so far outside what I expect to see on this site in general, or from Eliezer in particular, that I didn’t initially realize that he was being hostile. On any comment board I’ve seen more than a glimpse of outside OB/LW, I doubt I would have made that mistake.
There’s almost always room for improvement in this area but this is one place where I think we do an admirable job. If this were a primary cause, I would expect most forum-based sites to have the same problem, usually far worse than we do. Is there a general gender imbalance in most forums?
I thought that representative democracy meant that we could make wise decisions about who would be able to make wise decisions, and that part of what we are voting on is how they will make those decisions. Will they listen to our ideas of execution, or only to our values, or to their own values? Will they rely on our beliefs, their own, or experts’?
Even at that level, there is no need to believe that democracies make wise decisions to believe in them; you need only believe that the alternatives are worse. At least one of our major parties campaigns regularly on the idea that we are incapable of making wise decisions in government, yet its rhetoric is still very much pro-democracy.
Name: Zvi Mowshowitz
Location: New York City
Education: BA, Mathematics
I found OB through Marginal Revolution, which then led to LW. A few here know me from my previous job as a professional Magic: The Gathering player and writer and full member of the Competitive Conspiracy. That job highly rewarded the rationality I already had and encouraged its development, as does my current one, which unfortunately I can’t say much about here but which gives me more than enough practical reward to keep me coming back even if I weren’t fascinated anyway. I’m still trying to figure out what my top level posts are going to be about when I get that far.
While I have told my Magic origin story I don’t have one for rationality or atheism; I can’t remember ever being any other way and I don’t think anyone needs my libertarian one. If anything it took me time to realize that most people didn’t work that way, and how to handle that, which is something I’m still working on and the part of OB/LW I think I’ve gained the most from.
I’d guess that asking why you were downvoted makes other people think about why you were downvoted, and hence think about voting you up or down, which both overrides the previous vote since people vote to move numbers towards where they think they should be and also changes the mix of voters.
Doesn’t the publicity condition allow you to make statements like “If you have the skills to do A then do A, otherwise do B”? Similarly, to solve the case where everyone was just like you, a code can alter itself in the case that publicity cares about: “If X percent of agents are using this code, do Y, otherwise do Z.” It seems sensible to alter your behavior in both cases, even if it feels like dodging the condition.
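For concreteness, here is a toy sketch of a decision rule that conditions both on the agent’s own skills and on how widely the rule itself is adopted. The function name, the threshold X, and the action labels are my own illustrative assumptions, not anything from the post.

```python
# Toy sketch of a self-referential decision rule:
# "If you have the skills to do A then do A, otherwise do B", combined with
# "If X percent of agents are using this code, do Y, otherwise do Z."

def choose_action(has_skills_for_a, fraction_using_this_code, x_threshold=0.5):
    if has_skills_for_a:
        return "A"
    # Condition on how widely this very code is adopted.
    if fraction_using_this_code >= x_threshold:
        return "Y"
    return "Z"

print(choose_action(has_skills_for_a=False, fraction_using_this_code=0.7))  # -> "Y"
print(choose_action(has_skills_for_a=False, fraction_using_this_code=0.2))  # -> "Z"
```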
I think you have to take the 5k. The only way it doesn’t leave everyone better off and save lives is if you don’t actually believe your prior of >99%, in which case update your prior. I don’t see how what he does in another room matters. Any reputational effects are overwhelmed by the ability to save thousands of lives.
However, I also don’t see how you can cooperate in a true one-time prisoner’s dilemma without some form of cheating. The true PD presumes that I don’t care at all about the other side of the matrix, so assuming there isn’t some hidden reason to prefer cooperation (there are no reputational effects, personally or generally, no one can read or reconstruct my mind, etc.), why not just cover up the other side’s payoffs? My payoff then looks a lot like this: C → X, D → X+1, where X is unknown.
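To spell out that “C → X, D → X+1” point with a small sketch: under an assumed standard payoff matrix (my illustrative numbers, not from the comment), defecting gains the same fixed amount no matter what the other player does, which is exactly the dominance argument.

```python
# Illustration with assumed payoffs: in a true one-shot PD where I care only
# about my own payoff, defecting is better whatever the other player does,
# so my choice looks like "C -> X, D -> X+1" for some unknown X.

# my_payoff[(my_move, their_move)]
my_payoff = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 4, ("D", "D"): 1,
}

for their_move in ("C", "D"):
    gain = my_payoff[("D", their_move)] - my_payoff[("C", their_move)]
    print(f"If they play {their_move}, defecting gains me {gain}")
```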
Also, as a fun side observation, this sounds suspiciously like a test designed to figure out which of us actually thinks we’re >99% once we take into account the other opinion and which of us is only >99% before we take into account the other opinion. Dr. House might be thinking that if we order 15k of either medicine, that one is right often enough that his work here is done. I’d have to assign that p>0.01, as it’s actually a less evil option than taking him at face value. But I’m presuming that’s not the case and we can trust the scenario as written.
Seems simple enough to me, too, as my answer yesterday implied. The probability the Earth is that young is close enough to 0 that it doesn’t factor into my utility calculations, so Omega is asking me if I want to save a billion people. Do whatever you have to do to convince him, then save a billion people.
What makes us assume this? I get why this can be the case in examples where you can see each other’s source code, and I do one-box on Newcomb, where a similar situation is given, but I don’t see how we can presume that there is this kind of instrumental value here. All we know about this person is that he is a flat earther, and I don’t see how that corresponds to such efficient lie detection in both directions for both of us.
Obviously if we had a tangible precommitment option that was sufficient when a billion lives were at stake, I would take it. And I agree that if the payoffs were 1 person vs. 2 billion people on both sides, this would be a risk I’d be willing to take. But I don’t see how we can suppose that the correspondence between “he thinks I will choose C if he agrees to choose C, and in fact then chooses C” and “I actually intend to choose C if he agrees to choose C” is all that high. If the flat Earther in question is the person on whom they based Dr. Cal Lightman, I still don’t choose C, because I’d feel that even if he believed me he’d probably choose D anyway. Do you think most humans are this good at lie detection (I know that I am not), and if so, do you have evidence for it?