I’d assumed what I posted was the LW meditator consensus, or at least compatible with it.
In prediction markets, the cost of capital tied up in trades is a major distorting factor, as are fees, taxes, and other physical costs, and participants are much less certain of correct prices and much more worried about price impact and how many others are in the same trade. Almost everyone looking to correct inefficiencies will only fade very large and very obvious ones, given all these costs.
https://blog.rossry.net/predictit/ has a really good discussion of how this works, with some associated numbers that show how you will probably outright lose money on even apparently ironclad trades like the 112-total candidates above.
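To make the arithmetic concrete, here is a minimal sketch assuming a PredictIt-style fee structure (roughly a 10% fee on profits and a 5% fee on withdrawals) plus some cost for having your capital locked up until resolution; these numbers are illustrative assumptions, not figures taken from the linked post.

```python
# Rough sketch of how fees and locked-up capital can turn an "ironclad"
# prediction-market trade into a loss. The fee numbers are assumptions
# in the PredictIt ballpark (10% fee on profits, 5% fee on withdrawals);
# see the linked post for the real figures.

def net_return(buy_price, payout=1.00, profit_fee=0.10, withdrawal_fee=0.05,
               months_locked=6, annual_cost_of_capital=0.05):
    profit = (payout - buy_price) * (1 - profit_fee)   # fee taken out of gross profit
    balance = buy_price + profit                       # what ends up in your account
    cash_out = balance * (1 - withdrawal_fee)          # fee on withdrawing everything
    opportunity_cost = buy_price * annual_cost_of_capital * months_locked / 12
    return cash_out - buy_price - opportunity_cost

# Buying at 95 cents on a contract that is essentially certain to pay $1:
print(net_return(0.95))   # negative, despite a "guaranteed" 5-cent edge
```

Even before worrying about price impact or how many other traders are in the same position, the fees and the months of locked-up capital can eat the whole apparent edge.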
I’m sorry, I didn’t understand that. Yes, this answers my objection (although it might cause other problems, like making me less likely to answer “sorry, I can’t do that” compared to just ghosting someone).
I think it’s great that you’re trying this and I hope it succeeds.
But I won’t be using it. For me, the biggest problem is lowering the sense of obligation I feel to answer other people’s emails. If I felt no obligation, there would be no problem: I would just delete an email and move on. But part of me feels like I’m incurring a social cost by doing that, so it’s harder than it sounds.
I feel like using a service like this would make the problem worse, not better. It would make me feel a strong sense of obligation to answer someone’s email if they had paid $5 to send it. What sort of monster deletes an email they know the other person had to pay money to send?
In the same way, I would feel nervous sending someone else a paid email, because I would feel like I was imposing a stronger sense of obligation on them to respond to my request, rather than it being a harmless ask they can either answer or not. This would be true regardless of how important my email was. Meanwhile, people who don’t care about other people’s feelings won’t really be held back, since $5 is not a lot of money for most people in this community.
I think the increased obligation would dominate any tendency for me to get fewer emails, and make this a net negative in my case. I still hope other people try it and report back.
What would you recommend to people who are doing this (or to people who aren’t sure whether they’re doing it)?
I’m a little confused, and I think it might be because you’re using “conflict theorist” differently from how I do. For me, a conflict theorist is someone who thinks the main driver of disagreement is self-interest rather than honest mistakes. There can be mistake theorists and conflict theorists on both sides of the “is billionaire philanthropy good?” question, and on the “are individual actions acceptable even though they’re nondemocratic?” question. It sounds like you’re using it differently, so I want to make sure I know exactly what you mean before replying.
You say you’ve given up on understanding the number of basically good people who disagree with things you think are obvious and morally obligatory. I suspect there’s a big confusion about what ‘basically good’ means here; I’m making a note of it for future posting, but moving past that for now: when you examine specific cases of such disagreements happening, what do you find, and how often? (I keep writing out possible answers, but on reflection it’s better not to anchor you.)
I think I usually find we’re working off different paradigms, in the really strong Kuhnian sense of paradigm.
Rob Reich is a former board member of GiveWell and Good Ventures (i.e. Moskovitz and Tuna), and the people at OpenPhil seem to have a huge amount of respect for him. He responded to my article by tweeting “Really grateful to have my writing taken seriously by someone whose blog I’ve long enjoyed and learned from” and promising to write a reply soon.
Dylan Matthews, who wrote the Vox article I linked (I don’t know if he is against billionaire philanthropy, but he seems to hold some sympathy for the position), self-describes as EA, has donated a kidney, and switched from opposing work on AI risk to supporting it after reading arguments on the topic.
And here’s someone on the subreddit saying that they previously had some sympathy for anti-billionaire-philanthropy arguments but are now more convinced that it’s net positive.
I don’t think any of these people fit your description of “people opposed to nerds or to thinking”, “people opposed to all private actions not under ‘democratic control’”, or “people opposed to action of any kind.” They seem like basically good people who I disagree with. I am constantly surprised by how many things that seem obvious and morally obligatory to me can have basically good people disagree with them, and I have kind of given up on trying to understand it, but there we go.
Even if there are much worse people in the movement, I think getting Reich and Matthews alone to dial it down 10% would be very net positive, since they’re among the most prominent opponents.
I was concerned about backlash and ran the post by a couple of people I trusted to see if they thought it was net positive, and they all said it was. If you want I’ll run future posts I have those concerns about by you too.
Instead of Quinn admitting lying is sometimes good, I wish he had said something like:
“PADP is widely considered a good charity by smart people who we trust. So we have a prior on it being good. You’ve discovered some apparent evidence that it’s bad. So now we have to combine the prior and the evidence, and we end up with some percent confidence that they’re bad.
If this is 90% confidence they’re bad, go ahead. What if it’s more like 55%? What’s the right action to take if you’re 55% sure a charity is incompetent and dishonest (but 45% chance you misinterpreted the evidence)? Should you call them out on it? That’s good in the world where you’re right, but might disproportionately tarnish their reputation in the world where they’re wrong. It seems like if you’re 55% sure, you have a tough call. You might want to try something like bringing up your concerns privately with close friends and only going public if they share your opinion, or asking the charity first and only going public if they can’t explain themselves. Or you might want to try bringing up your concerns in a nonconfrontational way, more like ‘Can anyone figure out what’s going on with PADP’s math?’ rather than ‘PADP is dishonest’. After this doesn’t work and lots of other people confirm your intuitions of distrust, then your confidence reaches 90% and you start doing things more like shouting ‘PADP is dishonest’ from the rooftops.
Or maybe you’ll never reach 90% confidence. Many people think that climate science is dishonest. I don’t doubt many of them are reporting their beliefs honestly—that they’ve done a deep investigation and that’s what they’ve concluded. It’s just that they’re not smart, informed, or rational enough to understand what’s going on, or to process it in an unbiased way. What advice would you give these people about calling scientists out on dishonesty—again given that rumors are powerful things and can ruin important work? My advice to them would be to consider that they may be overconfident, and that there needs to be some intermediate ‘consider my own limitations and the consequences of my irreversible actions’ step in between ‘this looks dishonest to me’ and ‘I will publicly declare it dishonest’. And that step is going to look like an appeal to consequences, especially if the climate deniers are so caught up in their own biases that they can’t imagine they might be wrong.
I don’t want to deny that calling out apparent dishonesty when you’re pretty sure of it, or when you’ve gone through every effort you can to check it and it still seems bad, will sometimes (maybe usually) be the best course, but I don’t think it’s as simple as you think.”
...and seen what Carter answered.
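As an aside, here is a minimal sketch of the “combine the prior and the evidence” step that hypothetical reply gestures at, in odds form, with entirely made-up numbers (the PADP figures below are placeholders, not anything from the actual discussion):

```python
# Toy Bayesian update for the hypothetical PADP example above.
# All numbers are made up for illustration.

def posterior_bad(prior_bad, likelihood_ratio):
    """P(charity is bad | evidence), given a prior and the evidence's
    likelihood ratio P(evidence | bad) / P(evidence | good)."""
    prior_odds = prior_bad / (1 - prior_bad)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Smart, trusted people vouch for the charity, so start with a low prior
# that it's bad (say 10%). Suppose the suspicious math is 5x more likely
# to show up if the charity really is dishonest.
print(posterior_bad(0.10, 5))   # ~0.36: suspicious, but nowhere near 90%
```

The point is just that a strong prior from trusted endorsements can keep the posterior well below the shout-it-from-the-rooftops threshold even after genuinely suspicious evidence.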
1. It sounds like we have a pretty deep disagreement here, so I’ll write an SSC post explaining my opinion in depth sometime.
2. Sorry, it seems I misunderstood you. What did you mean by mentioning business’s very short timelines and all of the biases that might make them have those?
3. I feel like this is dismissing the magnitude of the problem. Suppose I said that the Democratic Party was a lying scam that was duping Americans into believing it, because many Americans were biased to support the Democratic Party for various demographic reasons, or because their families were Democrats, or because they’d seen campaign ads, etc. These biases could certainly exist. But if I didn’t even mention that there might be similar biases making people support the Republican Party, let alone try to estimate which was worse, I’m not sure this would qualify as sociopolitical analysis.
4. I was trying to explain why people in a field might prefer that members of the field address disagreements through internal channels rather than the media, for reasons other than that they have a conspiracy of silence. I’m not sure what you mean by “concrete criticisms”. You cherry-picked some reasons for believing long timelines; I agree these exist. There are other arguments for believing shorter timelines and that people believing in longer timelines are “duped”. What it sounded like you were claiming is that the overall bias is in favor of making people believe in shorter ones, which I think hasn’t been proven.
I’m not entirely against modeling sociopolitical dynamics, which is why I ended the sentence with “at this level of resolution”. I think a structured attempt to figure out whether there were more biases in favor of long timelines or short timelines (for example, surveying AI researchers on what they would feel uncomfortable saying) would be pretty helpful. I interpreted this post as more like the Democrat example in 3 - cherry-picking a few examples of bias towards short timelines, then declaring short timelines to be a scam. I don’t know if this is true or not, but I feel like you haven’t supported it.
Bayes’ theorem says that we shouldn’t update on evidence we could have gotten whether or not a hypothesis were true. I feel like you could have written an equally compelling essay “proving” bias in favor of long timelines, of Democrats, of Republicans, or of almost anything; if you feel like you couldn’t, I feel like the post didn’t explain why you felt that way. So I don’t think we should update on the information in this, and I think the intensity of your language (“scam”, “lie”, “dupe”) is incongruous with the lack of update-able information.
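To spell out the standard identity being leaned on here (this is textbook Bayes in odds form, not anything specific to the post):

$$\frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}$$

If the essay could have been written about equally easily whether or not the hypothesis is true, then P(E | H) ≈ P(E | ¬H), the likelihood ratio is about 1, and the posterior odds stay at the prior odds: no update.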
1. For reasons discussed on comments to previous posts here, I’m wary of using words like “lie” or “scam” to mean “honest reporting of unconsciously biased reasoning”. If I criticized this post by calling you a liar trying to scam us, and then backed down to “I’m sure you believe this, but you probably have some bias, just like all of us”, I expect you would be offended. But I feel like you’re making this equivocation throughout this post.
2. I agree business is probably overly optimistic about timelines, for about the reasons you mention. But reversed stupidity is not intelligence. Most of the people I know pushing short timelines work in nonprofits, and many of the people you’re criticizing in this post are AI professors. Unless you got your timelines from industry, which I don’t think many people here did, them being stupid isn’t especially relevant to whether we should believe the argument in general. I could find you some field (like religion) where people are biased to believe AI will never happen, but unless we took them seriously before this, the fact that they’re wrong doesn’t change anything.

3. I’ve frequently heard people who believe AI might be near say that their side can’t publicly voice their opinions, because they’ll get branded as loonies and alarmists, and therefore we should adjust in favor of near-termism because long-timelinists get to unfairly dominate the debate. I think it’s natural for people on all sides of an issue to feel like their side is uniquely silenced by a conspiracy of people biased towards the other side. See Against Bravery Debates for evidence of this.

4. I’m not familiar with the politics in AI research. But in medicine, I’ve noticed that doctors who go straight to the public with their controversial medical theory are usually pretty bad, for one of a couple of reasons. Number one, they’re usually wrong, people in the field know they’re wrong, and they’re trying to bamboozle a reading public who aren’t smart enough to figure out that they’re wrong (but who are hungry for a “Galileo stands up to hidebound medical establishment” narrative). Number two, there’s a thing they can do where they say some well-known fact in a breathless tone, and then get credit for having blown the cover of the establishment’s lie. You can always get a New Yorker story by writing “Did you know that, contrary to what the psychiatric establishment wants you to believe, SOME DRUGS MAY HAVE SIDE EFFECTS OR WITHDRAWAL SYNDROMES?” Then the public gets up in arms, and the psychiatric establishment has to spend the next few months on damage control, striking an awkward balance between correcting the inevitable massive misrepresentations in the article and saying the basic premise is !@#$ing obvious and was never in doubt. When I hear people say something like “You’re not presenting an alternative solution” in these cases, they mean something like “You don’t have some alternate way of treating diseases that has no side effects, so stop pretending you’re Galileo for pointing out a problem everyone was already aware of.” See Beware Stephen Jay Gould for Eliezer giving an example of this, and Chemical Imbalance and the followup post for me giving an example of this. I don’t know for sure that this is what’s going on in AI, but it would make sense.

I’m not against modeling sociopolitical dynamics. But I think you’re doing it badly, by taking some things that people on both sides feel, applying them to only one side, and concluding that this means the other side is involved in lies and scams and conspiracies of silence (while later disclaiming these terms, after they’ve had their intended shocking effect).

I think this is one of the cases where we should use our basic rationality tools like probability estimates. Just from reading this post, I have no idea what probability Gary Marcus, Yann LeCun, or Steven Hansen has on AGI in ten years (or fifty years, or one hundred years).
For all I know, all of them (and you, and me) have exactly the same probability, and their argument is entirely a political one about which side is dominant vs. oppressed and who should gain or lose status (remember the issue where everyone assumes LWers are overly certain cryonics will work, whereas in fact they’re less sure of this than the general population and just describe their beliefs differently). As long as we keep engaging on that relatively superficial monkey-politics “The other side are liars who are silencing my side!” level, we’re just going to be drawn into tribalism around the near-timeline and far-timeline tribes, and our ability to make accurate predictions is going to suffer. I think this is worse than any improvement we could get by making sociopolitical adjustments at this level of resolution.
I’ve actually been thinking about this for a while, here’s a very rough draft outline of what I’ve got:
1. Which questions are important?
 a. How should we practice cause prioritization in effective altruism?
 b. How should we think about long shots at very large effects? (Pascal’s Mugging)
 c. How much should we be focusing on the global level, vs. our own happiness and ability to lead a normal life?
 d. How do we identify gaps in our knowledge that might be wrong and need further evaluation?
 e. How do we identify unexamined areas of our lives or decisions we make automatically? Should we examine those areas and make those decisions less automatically?

2. How do we determine whether we are operating in the right paradigm?
 a. What are paradigms? Are they useful to think about?
 b. If we were using the wrong paradigm, how would we know? How could we change it?
 c. How do we learn new paradigms well enough to judge them at all?

3. How do we determine what the possible hypotheses are?
 a. Are we unreasonably bad at generating new hypotheses once we have one, due to confirmation bias? How do we solve this?
 b. Are there surprising techniques that can help us with this problem?

4. Which of the possible hypotheses is true?
 a. How do we make accurate predictions?
 b. How do we calibrate our probabilities?

5. How do we balance our explicit reasoning vs. that of other people and society?
 a. Inside vs. outside view?
 b. How do we identify experts? How much should we trust them?
 c. Does cultural evolution produce accurate beliefs? How willing should we be to break tradition?
 d. How much should the replication crisis affect our trust in science?
 e. How well does good judgment travel across domains?

6. How do we go from accurate beliefs to accurate aliefs and effective action?
 a. Akrasia and procrastination
 b. Do different parts of the brain have different agendas? How can they all get on the same page?

7. How do we create an internal environment conducive to getting these questions right?
 a. Do strong emotions help or hinder rationality?
 b. Do meditation and related practices help or hinder rationality?
 c. Do psychedelic drugs help or hinder rationality?

8. How do we create a community conducive to getting these questions right?
 a. Is having “a rationalist community” useful?
 b. How do strong communities arise and maintain themselves?
 c. Should a community be organically grown or carefully structured?
 d. How do we balance conflicting desires for an accepting community where everyone can bring their friends and have fun, vs. high-standards devotion to a serious mission?
 e. How do we prevent a rationalist community from becoming insular / echo chambery / cultish?
 f. …without also admitting every homeopath who wants to convince us that “homeopathy is rational”?
 g. How do we balance the need for a strong community hub with the need for strong communities on the rim?
 h. Can these problems be solved by having many overlapping communities with slightly different standards?

9. How does this community maintain its existence in the face of outside pressure?
I don’t think it’s necessarily greed.
Your doctor may be on a system where they are responsible for doing work for you (e.g. refilling your prescriptions, doing whatever insurance paperwork it takes to make your prescriptions go through, keeping track of when you need to get certain tests, etc) without receiving any compensation except when you come in for office visits. One patient like this isn’t so bad. Half your caseload like this means potentially hours of unpaid labor every day. Even if an individual doctor is willing to do this, high-level decision-makers like clinics and hospitals will realize this is a bad deal, make policies to avoid it, and pressure individual doctors to conform to the policies.
Also, your doctor remains legally liable for anything bad that happens to you while you’re technically under their care, even if you never see them. If you’re very confused and injecting your insulin into your toenails every day, and then you get hyperglycemic, and your doctor never catches this because you never come into the office, you could sue them. So first of all, that means they’re carrying a legal risk for a patient they’re not getting any money from. And second of all, at the trial, your lawyer will ask “How often did you see so-and-so?” and the doctor will say “I haven’t seen them in years, I just kept refilling their prescription without asking any questions because they sent me an email saying I should”. And then they will lose, because being seen every three months is the standard of care. Again, even if an individual doctor is overly altruistic and willing to accept this risk, savvier high-level entities like clinics and hospitals will institute and enforce policies against it. The clinic I work at automatically closes your chart and sends you a letter saying you are no longer our patient if you haven’t seen us in X months (I can’t remember what X is off the top of my head). This sounds harsh, but if we didn’t do it, then if you ever got sick after having seen us even once, it would legally be our fault. Every lawyer in the world agrees you should do this; it’s not some particular doctor being a jerk.
Also, a lot of people really do need scheduled appointments. You would be shocked how many people get much worse, are on death’s door, and I only see them when their scheduled three-monthly appointment rolls around, and I ask them “Why didn’t you come in earlier?!” and they just say something like they didn’t want to bother me, or didn’t realize it was so bad, or some other excuse I can’t possibly fathom (to be fair, many of these people are depressed or psychotic). This real medical necessity meshes with (more cynically provides a fig leaf for, but it’s not a fake fig leaf) the financial and legal necessity.
I’m not trying to justify what your doctor did to you. If it were me, I would have refilled your insulin, then sent you a message saying in the future I needed to see you every three months. But I’ve seen patients try to get out of this. They’ll wait until the last possible moment, then send an email saying “I am out of my life-saving medication, you must refill now!” If I send a message saying we should have an appointment on the books before I fill it, they’ll pretend they didn’t see that and just resend “I need my life-saving medication now!” If my receptionist tries to call, they’ll hang up. At some point I start feeling like I’m being held hostage. I really only have one patient who is definitely doing this, but it’s enough that I can understand why some doctors don’t want to have to have this fight and institute a stricter “no refill until appointment is on the books” policy.
I do think there are problems with the system, but they’re more like:
- A legal system that keeps all doctors perpetually afraid of malpractice if they’re not doing this (but what is the alternative?)
- An insurance system that doesn’t let doctors get money except through appointments. If the doctor just charged you a flat fee per year for being their patient, that would remove the financial aspect of the problem. Some concierge doctors do this, but insurances don’t work that way (but insurances are pretty savvy, are they afraid doctors would cheat?)
- The whole idea that you can’t access life-saving medication until an official gives you permission (but what would be the effects of making potentially dangerous medications freely available?)
I showed it that way because it made more sense to me. But if you want, see https://docs.google.com/spreadsheets/d/1xEkh4jhUup0qlG6EzBct6igvLPeRH4avpM5nZQ-dgek/edit#gid=478995971 for a graph by Paul where the horizontal axis is log(GDP); it is year-agnostic and shows the same pattern.
You may be interested in “Behavior: The Control of Perception” by William Powers, which has been discussed here a few times.