Instead of Quinn admitting lying is sometimes good, I wish he had said something like:
“PADP is widely considered a good charity by smart people who we trust. So we have a prior on it being good. You’ve discovered some apparent evidence that it’s bad. So now we have to combine the prior and the evidence, and we end up with some percent confidence that they’re bad.
If this is 90% confidence they’re bad, go ahead. What if it’s more like 55%? What’s the right action to take if you’re 55% sure a charity is incompetent and dishonest (but 45% chance you misinterpreted the evidence)? Should you call them out on it? That’s good in the world where you’re right, but might disproportionately tarnish their reputation in the world where you’re wrong. It seems like if you’re 55% sure, you have a tough call. You might want to try something like bringing up your concerns privately with close friends and only going public if they share your opinion, or asking the charity first and only going public if they can’t explain themselves. Or you might want to try bringing up your concerns in a nonconfrontational way, more like ‘Can anyone figure out what’s going on with PADP’s math?’ rather than ‘PADP is dishonest’. After this doesn’t work and lots of other people confirm your intuitions of distrust, then your confidence reaches 90% and you start doing things more like shouting ‘PADP is dishonest’ from the rooftops.
Or maybe you’ll never reach 90% confidence. Many people think that climate science is dishonest. I don’t doubt many of them are reporting their beliefs honestly—that they’ve done a deep investigation and that’s what they’ve concluded. It’s just that they’re not smart, informed, or rational enough to understand what’s going on, or to process it in an unbiased way. What advice would you give these people about calling scientists out on dishonesty—again given that rumors are powerful things and can ruin important work? My advice to them would be to consider that they may be overconfident, and that there needs to be some intermediate ‘consider my own limitations and the consequences of my irreversible actions’ step in between ‘this looks dishonest to me’ and ‘I will publicly declare it dishonest’. And that step is going to look like an appeal to consequences, especially if the climate deniers are so caught up in their own biases that they can’t imagine they might be wrong.
I don’t want to deny that calling out apparent dishonesty when you’re pretty sure of it, or when you’ve gone through every effort you can to check it and it still seems bad, will sometimes (maybe usually) be the best course, but I don’t think it’s as simple as you think.”
...and seen what Carter answered.
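The prior-and-evidence arithmetic in the quoted reply can be sketched with Bayes’ rule. This is a minimal sketch; the numbers and names below are illustrative assumptions, not anything from the actual discussion:

```python
def posterior(prior, p_evidence_if_bad, p_evidence_if_good):
    """Combine a prior P(bad) with evidence E via Bayes' rule:
    P(bad | E) = P(E | bad) * P(bad) / P(E)."""
    p_evidence = prior * p_evidence_if_bad + (1 - prior) * p_evidence_if_good
    return prior * p_evidence_if_bad / p_evidence

# Trusted people vouch for the charity, so start with a low prior that it's bad.
prior = 0.10
# Suppose the suspicious-looking math is ten times likelier if it really is bad.
print(round(posterior(prior, 0.90, 0.09), 2))  # 0.53 -- roughly the "55% sure" case
```

Even evidence with a strong likelihood ratio, run against a well-trusted prior, can leave you near a coin flip — which is the quoted argument for intermediate steps before a public accusation.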
1. It sounds like we have a pretty deep disagreement here, so I’ll write an SSC post explaining my opinion in depth sometime.
2. Sorry, it seems I misunderstood you. What did you mean by mentioning business’s very short timelines and all of the biases that might make them have those?
3. I feel like this is dismissing the magnitude of the problem. Suppose I said that the Democratic Party was a lying scam that was duping Americans into believing it, because many Americans were biased to support the Democratic Party for various demographic reasons, or because their families were Democrats, or because they’d seen campaign ads, etc. These biases could certainly exist. But if I didn’t even mention that there might be similar biases making people support the Republican Party, let alone try to estimate which was worse, I’m not sure this would qualify as sociopolitical analysis.
4. I was trying to explain why people in a field might prefer that members of the field address disagreements through internal channels rather than the media, for reasons other than that they have a conspiracy of silence. I’m not sure what you mean by “concrete criticisms”. You cherry-picked some reasons for believing long timelines; I agree these exist. There are other arguments for believing shorter timelines and that people believing in longer timelines are “duped”. What it sounded like you were claiming is that the overall bias is in favor of making people believe in shorter ones, which I think hasn’t been proven.
I’m not entirely against modeling sociopolitical dynamics, which is why I ended the sentence with “at this level of resolution”. I think a structured attempt to figure out whether there were more biases in favor of long timelines or short timelines (for example, surveying AI researchers on what they would feel uncomfortable saying) would be pretty helpful. I interpreted this post as more like the Democrat example in 3 - cherry-picking a few examples of bias towards short timelines, then declaring short timelines to be a scam. I don’t know if this is true or not, but I feel like you haven’t supported it.
Bayes’ Theorem says that we shouldn’t update on information that we could get whether or not a hypothesis were true. I feel like you could have written an equally compelling essay “proving” bias in favor of long timelines, of Democrats, of Republicans, or of almost anything; if you feel like you couldn’t, I feel like the post didn’t explain why you felt that way. So I don’t think we should update on the information in this post, and I think the intensity of your language (“scam”, “lie”, “dupe”) is incongruous with the lack of update-able information.
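In odds form, the point is that evidence moves you only by its likelihood ratio, and evidence you would see under either hypothesis has a ratio of 1. A minimal sketch, with hypothetical numbers:

```python
def update_odds(prior_odds, likelihood_ratio):
    """Posterior odds = prior odds * P(E | H) / P(E | not-H)."""
    return prior_odds * likelihood_ratio

# If an equally compelling essay could be written for either side,
# then P(essay | bias) == P(essay | no bias) and the ratio is 1.
print(update_odds(0.5, 1.0))  # 0.5 -- the prior odds, unchanged
```

Only evidence that is likelier under one hypothesis than the other (a ratio away from 1) should shift anyone’s confidence.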
1. For reasons discussed in comments on previous posts here, I’m wary of using words like “lie” or “scam” to mean “honest reporting of unconsciously biased reasoning”. If I criticized this post by calling you a liar trying to scam us, and then backed down to “I’m sure you believe this, but you probably have some bias, just like all of us”, I expect you would be offended. But I feel like you’re making this equivocation throughout this post.
2. I agree business is probably overly optimistic about timelines, for about the reasons you mention. But reversed stupidity is not intelligence. Most of the people I know pushing short timelines work in nonprofits, and many of the people you’re criticizing in this post are AI professors. Unless you got your timelines from industry, which I don’t think many people here did, them being stupid isn’t especially relevant to whether we should believe the argument in general. I could find you some field (like religion) where people are biased to believe AI will never happen, but unless we took them seriously before this, the fact that they’re wrong doesn’t change anything.
3. I’ve frequently heard people who believe AI might be near say that their side can’t publicly voice their opinions, because they’ll get branded as loonies and alarmists, and therefore we should adjust in favor of near-termism because long-timelinists get to unfairly dominate the debate. I think it’s natural for people on all sides of an issue to feel like their side is uniquely silenced by a conspiracy of people biased towards the other side. See Against Bravery Debates for evidence of this.
4. I’m not familiar with the politics in AI research. But in medicine, I’ve noticed that doctors who go straight to the public with their controversial medical theory are usually pretty bad, for one of a couple of reasons. Number one, they’re usually wrong, people in the field know they’re wrong, and they’re trying to bamboozle a reading public who aren’t smart enough to figure out that they’re wrong (but who are hungry for a “Galileo stands up to hidebound medical establishment” narrative). Number two, there’s a thing they can do where they say some well-known fact in a breathless tone, and then get credit for having blown the cover of the establishment’s lie.
You can always get a New Yorker story by writing “Did you know that, contrary to what the psychiatric establishment wants you to believe, SOME DRUGS MAY HAVE SIDE EFFECTS OR WITHDRAWAL SYNDROMES?” Then the public gets up in arms, and the psychiatric establishment has to go on damage control for the next few months and strike an awkward balance between correcting the inevitable massive misrepresentations in the article and saying the basic premise is !@#$ing obvious and was never in doubt. When I hear people say something like “You’re not presenting an alternative solution” in these cases, they mean something like “You don’t have some alternate way of treating diseases that has no side effects, so stop pretending you’re Galileo for pointing out a problem everyone was already aware of.” See Beware Stephen Jay Gould for Eliezer giving an example of this, and Chemical Imbalance and the followup post for me giving an example of this. I don’t know for sure that this is what’s going on in AI, but it would make sense.
I’m not against modeling sociopolitical dynamics. But I think you’re doing it badly, by taking some things that people on both sides feel, applying it to only one side, and concluding that means the other is involved in lies and scams and conspiracies of silence (while later walking these terms back in a disclaimer, after they’ve had their intended shocking effect).
I think this is one of the cases where we should use our basic rationality tools like probability estimates. Just from reading this post, I have no idea what probability Gary Marcus, Yann LeCun, or Steven Hansen has on AGI in ten years (or fifty years, or one hundred years). For all I know, all of them (and you, and me) have exactly the same probability and their argument is completely political, about which side is dominant vs. oppressed and who should gain or lose status (remember the issue where everyone assumes LWers are overly certain cryonics will work, whereas in fact they’re less sure of this than the general population and just describe their beliefs differently). As long as we keep engaging on that relatively superficial monkey-politics “The other side are liars who are silencing my side!” level, we’re just going to be drawn into tribalism around the near-timeline and far-timeline tribes, and our ability to make accurate predictions is going to suffer. I think this is worse than any improvement we could get by making sociopolitical adjustments at this level of resolution.
I’ve actually been thinking about this for a while, here’s a very rough draft outline of what I’ve got:
1. Which questions are important?
   a. How should we practice cause prioritization in effective altruism?
   b. How should we think about long shots at very large effects? (Pascal’s Mugging)
   c. How much should we be focusing on the global level, vs. our own happiness and ability to lead a normal life?
   d. How do we identify gaps in our knowledge that might be wrong and need further evaluation?
   e. How do we identify unexamined areas of our lives or decisions we make automatically? Should we examine those areas and make those decisions less automatically?
2. How do we determine whether we are operating in the right paradigm?
   a. What are paradigms? Are they useful to think about?
   b. If we were using the wrong paradigm, how would we know? How could we change it?
   c. How do we learn new paradigms well enough to judge them at all?
3. How do we determine what the possible hypotheses are?
   a. Are we unreasonably bad at generating new hypotheses once we have one, due to confirmation bias? How do we solve this?
   b. Are there surprising techniques that can help us with this problem?
4. Which of the possible hypotheses is true?
   a. How do we make accurate predictions?
   b. How do we calibrate our probabilities?
5. How do we balance our explicit reasoning vs. that of other people and society?
   a. Inside vs. outside view?
   b. How do we identify experts? How much should we trust them?
   c. Does cultural evolution produce accurate beliefs? How willing should we be to break tradition?
   d. How much should the replication crisis affect our trust in science?
   e. How well does good judgment travel across domains?
6. How do we go from accurate beliefs to accurate aliefs and effective action?
   a. Akrasia and procrastination
   b. Do different parts of the brain have different agendas? How can they all get on the same page?
7. How do we create an internal environment conducive to getting these questions right?
   a. Do strong emotions help or hinder rationality?
   b. Do meditation and related practices help or hinder rationality?
   c. Do psychedelic drugs help or hinder rationality?
8. How do we create a community conducive to getting these questions right?
   a. Is having “a rationalist community” useful?
   b. How do strong communities arise and maintain themselves?
   c. Should a community be organically grown or carefully structured?
   d. How do we balance conflicting desires for an accepting community where everyone can bring their friends and have fun, vs. high-standards devotion to a serious mission?
   e. How do we prevent a rationalist community from becoming insular / echo chambery / cultish?
   f. …without also admitting every homeopath who wants to convince us that “homeopathy is rational”?
   g. How do we balance the need for a strong community hub with the need for strong communities on the rim?
   h. Can these problems be solved by having many overlapping communities with slightly different standards?
9. How does this community maintain its existence in the face of outside pressure?
I don’t think it’s necessarily greed.
Your doctor may be on a system where they are responsible for doing work for you (e.g. refilling your prescriptions, doing whatever insurance paperwork it takes to make your prescriptions go through, keeping track of when you need to get certain tests, etc) without receiving any compensation except when you come in for office visits. One patient like this isn’t so bad. Half your caseload like this means potentially hours of unpaid labor every day. Even if an individual doctor is willing to do this, high-level decision-makers like clinics and hospitals will realize this is a bad deal, make policies to avoid it, and pressure individual doctors to conform to the policies.
Also, your doctor remains very legally liable for anything bad that happens to you while you’re technically under their care, even if you never see them. If you’re very confused and injecting your insulin into your toenails every day, and then you get hyperglycemic, and your doctor never catches this because you never come into the office, you could sue them. So first of all, that means they’re carrying a legal risk for a patient they’re not getting any money from. And second of all, at the trial, your lawyer will ask “How often did you see so-and-so?” and the doctor will say “I haven’t seen them in years, I just kept refilling their prescription without asking any questions because they sent me an email saying I should”. And then they will lose, because being seen every three months is standard of care. Again, even if an individual doctor is overly altruistic and willing to accept this risk, high-level savvier entities like clinics and hospitals will institute and enforce policies against it. The clinic I work at automatically closes your chart and sends you a letter saying you are no longer our patient if you haven’t seen us in X months (I can’t remember what X is off the top of my head). This sounds harsh, but if we didn’t do it, then if you ever got sick after having seen us even once, it would legally be our fault. Every lawyer in the world agrees you should do this, it’s not some particular doctor being a jerk.
Also, a lot of people really do need scheduled appointments. You would be shocked how many people get much worse, are on death’s door, and I only see them when their scheduled three-monthly appointment rolls around, and I ask them “Why didn’t you come in earlier?!” and they just say something like they didn’t want to bother me, or didn’t realize it was so bad, or some other excuse I can’t possibly fathom (to be fair, many of these people are depressed or psychotic). This real medical necessity meshes with (more cynically provides a fig leaf for, but it’s not a fake fig leaf) the financial and legal necessity.
I’m not trying to justify what your doctor did to you. If it were me, I would have refilled your insulin, then sent you a message saying in the future I needed to see you every three months. But I’ve seen patients try to get out of this. They’ll wait until the last possible moment, then send an email saying “I am out of my life-saving medication, you must refill now!” If I send a message saying we should have an appointment on the books before I fill it, they’ll pretend they didn’t see that and just resend “I need my life-saving medication now!” If my receptionist tries to call, they’ll hang up. At some point I start feeling like I’m being held hostage. I really only have one patient who is definitely doing this, but it’s enough that I can understand why some doctors don’t want to have to have this fight and institute a stricter “no refill until appointment is on the books” policy.
I do think there are problems with the system, but they’re more like:
- A legal system that keeps all doctors perpetually afraid of malpractice if they’re not doing this (but what is the alternative?)
- An insurance system that doesn’t let doctors get money except through appointments. If the doctor just charged you a flat fee per year for being their patient, that would remove the financial aspect of the problem. Some concierge doctors do this, but insurances don’t work that way (but insurances are pretty savvy, are they afraid doctors would cheat?)
- The whole idea that you can’t access life-saving medication until an official gives you permission (but what would be the effects of making potentially dangerous medications freely available?)
I showed it that way because it made more sense to me. But if you want, see https://docs.google.com/spreadsheets/d/1xEkh4jhUup0qlG6EzBct6igvLPeRH4avpM5nZQ-dgek/edit#gid=478995971 for a graph by Paul where the horizontal axis is log(GDP); it is year-agnostic and shows the same pattern.
You may be interested in “Behavior: The Control Of Perception” by Will Powers, which has been discussed here a few times.
Thanks for this response. I mostly agree with everything you’ve said.
While writing this, I was primarily thinking of reading books. I should have thought more about meeting people in person, in which case I would have echoed the warnings you gave about Michael. I think he is a good example of someone who both has some brilliant ideas and can lead people astray, but I agree with you that people’s filters are less functional (and charisma is more powerful) in the real-life medium.
On the other hand, I agree that Steven Pinker misrepresents basic facts about AI. But he was also involved in my first coming across “The Nurture Assumption”, which was very important for my intellectual growth and which I think has held up well. I’ve seen multiple people correct his basic misunderstandings of AI, and I worry less about being stuck believing false things forever than about missing out on Nurture-Assumption-level important ideas (I think I now know enough other people in the same sphere that Pinker isn’t a necessary source of this, but I think earlier for me he was).
There have been some books, including “Inadequate Equilibria” and “Zero To One”, that have warned people against the Outside View/EMH. This is the kind of idea that takes the safety wheels off cognition—it will help bright people avoid hobbling themselves, but also give gullible people new opportunities to fail. And there is no way to direct it, because non-bright, gullible people can’t identify themselves as such. I think the idea of ruling geniuses in is similarly dangerous, in that there’s no way to direct it only to non-gullible people who can appreciate good insight and throw off falsehoods. You can only say the words of warning, knowing that people are unlikely to listen.
I still think on net it’s worth having out there. But the example you gave of Michael and of in-person communication in general makes me wish I had added more warnings.
I notice this isn’t showing up on the sidebar of SSC; if you want it to, consider tagging this as SSC here.
I support the opposite perspective—it was wrong to ever focus on individual winning and we should drop the slogan.
“Rationalists should win” was originally a suggestion for how to think about decision theory; if one agent predictably ends up with more utility than another, its choice is more “rational”.
But this got caught up in excitement around “instrumental rationality”—the idea that the “epistemic rationality” skills of figuring out what was true were only the handmaiden to a much more exciting skill of succeeding in the world. The community redirected itself to figuring out how to succeed in the world, ie became a self-help group.
I understand the logic. If you are good at knowing what is true, then you can be good at knowing what is true about the best thing to do in a certain situation, which means you can be more successful than other people. I can’t deny this makes sense. I can just point out that it doesn’t resemble reality. Donald Trump continues to be more successful than every cognitive scientist and psychologist in the world combined, and this sort of thing seems to happen so consistently that I can no longer dismiss it as a fluke. I think it’s possible (and important) to analyze this phenomenon and see what’s going on. But the point is that this will involve analyzing a phenomenon—ie truth-seeking, ie epistemic rationality, ie the thing we’re good at and which is our comparative advantage—and not winning immediately.
Remember the history of medicine, which started with wise women unreflectingly using traditional herbs to cure conditions. Some very smart people like Hippocrates came up with reasonable proposals for better ideas, and it turned out they were much worse than the wise women. After a lot of foundational work they eventually became better than the wise women, but it took two thousand years, and a lot of people died in the meantime. I’m not sure you can short-circuit the “spend two thousand years flailing around and being terrible” step. It doesn’t seem like this community has.
And I’m worried about the effects of trying. People in the community are pushing a thousand different kinds of woo now, in exactly the way “Schools Proliferating Without Evidence” condemned. This is not the fault of their individual irrationality. My guess is that pushing woo is an almost inevitable consequence of taking self-help seriously. There are lots of things that sound like they should work, and that probably work for certain individual people, and it’s almost impossible to get the funding or rigor or sample size that you would need to study it at any reasonable level. I know a bunch of people who say that learning about chakras has done really interesting and beneficial things for them. I don’t want to say with certainty that they aren’t right—some of the chakras have a suspicious correspondence to certain glands or bundles of nerves in the body, and for all I know maybe it’s just a very strange way of understanding and visualizing those nerves’ behavior. But there’s a big difference between me saying “for all I know maybe...” and a community where people are going around saying “do chakras! they really work!” But if you want to be a self-help community, you don’t have a lot of other options.
I think my complaint is: once you become a self-help community, you start developing the sorts of epistemic norms that help you be a self-help community, and you start attracting the sort of people who are attracted to self-help communities. And then, if ten years later, someone says “Hey, are we sure we shouldn’t go back to being pure truth-seekers?”, it’s going to be a very different community that discusses the answer to that question.
We were doing very well before, and could continue to do very well, as a community about epistemic truth-seeking mixed with a little practical strategy. All of these great ideas like effective altruism or friendly AI that the community has contributed to, are all things that people got by thinking about, by trying to understand the world and avoid bias. I don’t think the rationalist community’s contribution to EA has been the production of unusually effective people to man its organizations (EA should focus on “winning” to be more effective, but no more so than any other movement or corporation, and they should try to go about it in the same way). I think rationality’s contribution has been helping carve out the philosophy and convince people that it was true, after which those people manned its organizations at a usual level of effectiveness. Maybe rationality also helped develop a practical path forward for those organizations, which is fine and a more limited and relevant domain than “self-help”.
I’m a little confused. The explanation you give would explain why people might punish pro-social punishers, but it doesn’t really give insight into why they would punish cooperators. Is the argument that cooperators are likely to also be pro-social punishers? Or am I misunderstanding the structure of the game?
I agree Evan’s intentions are good, and I’m glad that someone interesting who wants to criticize my writing is getting a chance to speak. I’m surprised this is downvoted as much as it has been, and I haven’t downvoted it myself.
My main concern is with the hyperbolic way this was pitched and the name of the meetup, which I understand were intended kind of as jokes but which sound kind of creepy to me when I am the person being joked about. I don’t think Evan needs to change these if he doesn’t want to, but I do just want to register the concern.