I was going off absence of evidence (the paper didn’t say anything other than taking the top 2%), so if anyone else has positive evidence, that would outweigh what I’m saying.
I agree that much of psychology etc. is bad for the reasons you state, but this doesn’t seem to be because everyone else has fried their brains by trying to simulate how to appease triskaidekaphobics too much. It’s because the actual triskaidekaphobics are the ones inventing the psychology theories. I know a bunch of people in academia who do various verbal gymnastics to appease the triskaidekaphobics, and when you talk to them in private they get everything 100% right.

I agree that most people will not literally have their buildings burned down if they speak out against orthodoxies (though there’s a folk etymology for getting fired which is relevant here). But I appreciate Zvi’s sequence on super-perfect competition as a signpost of where things can end up. I don’t think academics, organization leaders, etc. are in super-perfect competition the same way middle managers are, but I also don’t think we live in the world where everyone has infinite amounts of slack to burn endorsing taboo ideas and nothing can possibly go wrong.
I think you might be wrong about how fraud is legally defined. If the head of Pets.com says “You should invest in Pets.com, it’s going to make millions, everyone wants to order pet food online”, and then you invest in them, and then they go bankrupt, that person was probably biased and irresponsible, but nobody has committed fraud.
If Raleigh had simply said “Sponsor my expedition to El Dorado, which I believe has lots of gold”, that doesn’t sound like fraud either. But in fact he said:
For the rest, which myself have seen, I will promise these things that follow, which I know to be true. Those that are desirous to discover and to see many nations may be satisfied within this river, which bringeth forth so many arms and branches leading to several countries and provinces, above 2,000 miles east and west and 800 miles south and north, and of these the most either rich in gold or in other merchandises. The common soldier shall here fight for gold, and pay himself, instead of pence, with plates of half-a-foot broad, whereas he breaketh his bones in other wars for provant and penury. Those commanders and chieftains that shoot at honour and abundance shall find there more rich and beautiful cities, more temples adorned with golden images, more sepulchres filled with treasure, than either Cortes found in Mexico or Pizarro in Peru. And the shining glory of this conquest will eclipse all those so far-extended beams of the Spanish nation.
There were no Indian cities, and essentially no gold, anywhere in Guyana.
I agree with you that lots of people are biased! I agree this can affect their judgment in a way somewhere between conflict theory and mistake theory! I agree you can end up believing the wrong stories, or focusing on the wrong details, because of your bias! I’m just not sure that’s how fraud works, legally, and I’m not sure it’s an accurate description of what Sir Walter Raleigh did.
What exactly is contradictory? I only skimmed the relevant pages, but they all seemed to give a pretty similar picture. I didn’t get a great sense of exactly what was in Raleigh’s book, but all of them (and whoever tried him for treason) seemed to agree it was somewhere between heavily exaggerated and outright false, and I get the same impression from the full title, “The discovery of the large, rich, and beautiful Empire of Guiana, with a relation of the great and golden city of Manoa (which the Spaniards call El Dorado)”.
I’m confused by your confusion. The first paragraph establishes that Raleigh was at least as deceptive as the institutions he claimed to be criticizing. The second paragraph argues that if deceptive people can write famous poems about how they are the lone voice of truth in a deceptive world, we should be more careful about taking claims like that completely literally.
If you want more than that, you might have to clarify what part you don’t understand.
Questions that will be considered later, worth thinking about now, include: How does this persist? If things are so bad, why aren’t things way worse? Why haven’t these corporations fallen apart or been competed out of business? Given they haven’t, why hasn’t the entire economy collapsed? Why do regular people, aspirant managers and otherwise, still think of these manager positions as the ‘good jobs’ as opposed to picking up pitchforks and torches?
I hope you also answer a question I had when I was reading this: it’s percolated down into common consciousness that some jobs are unusually tough and demanding. Medicine, finance, etc. have reputations for being grueling. But I’d never heard that about middle management, and your picture of middle management sounds worse than either. Any thoughts on why knowledge of this hasn’t percolated down?
Walter Raleigh is also famous for leading an expedition to discover El Dorado. He didn’t find it, but he wrote a book saying that he definitely had, and that if people gave him funding for a second expedition he would bring back limitless quantities of gold. He got his funding, went on his second expedition, and of course found nothing. His lieutenant committed suicide out of shame, and his men decided the Spanish must be hoarding the gold and burnt down a Spanish town. On his return to England, Raleigh was tried for treason based on a combination of the attack on Spain (which England was at peace with at the time) and defrauding everyone about the El Dorado thing. He was executed in 1618.

For conflict theorists, the moral of this story is that accusing everyone else of lying and corruption can sometimes be a strategy con men use to deflect suspicion. For mistake theorists, the moral is that it’s really easy to talk yourself into a biased narrative where you are a lone angel in a sea full of corruption, and you should try being a little more charitable to other people and a little harsher on yourself.
In this post and the previous one you linked to, you do a good job explaining why your criterion e is possible / not ruled out by the data. But can you explain more about what makes you think it’s true? Maybe this is part of the standard predictive coding account and I’m just misunderstanding it; if so, can you link me to a paper that explains it?

I’m a little nervous about the low-confidence model of depression, both for some of the reasons you bring up, and because the best fits (washed-out visual field and psychomotor retardation) are really marginal symptoms of depression that you only find in a few of the worst cases. The idea of depression as just a strong global negative prior (that makes you interpret everything you see and feel more negatively) is pretty tempting. I like Friston’s attempt to unify these by saying that bad mood is just a claim that you’re in an unpredictable environment, with the reasoning apparently being something like “if you have no idea what’s going on, probably you’re failing” (eg if you have no idea about the social norms in a given space, you’re more likely to be accidentally stepping on someone’s toes than brilliantly navigating complicated coalitional politics by coincidence). I’m not sure what direction all of this happens in. Maybe if your brain’s computational machinery gets degraded by some biochemical insult, it widens all confidence intervals since it can’t detect narrow hits; this results in fewer or weaker positive hits being detected, which gets interpreted as an unpredictable world, which in turn gets interpreted as a negative prior on how you’re doing?
Things sometimes get bad. Once things get sufficiently bad that no one can deviate from short-term selfish actions or be a different type of person without being wiped out, things are no longer stable. People cheat on long term investments, including various combinations of things such as having and raising children, maintaining infrastructure and defending norms. The seed corn gets eaten. Eventually, usually when some random new threat inevitably emerges, the order collapses, and things start again. The rise and fall of civilizations.
I’m wondering if you’re thinking of https://slatestarcodex.com/2019/08/12/book-review-secular-cycles/ . I think that was what made me realize things worked this way, and it was indeed a big update on the standard narrative. I still haven’t decided whether this is just a quirk of systems that have certain agriculture-related dynamics, or a more profound insight about systems in general. I look forward to reading more of what you have to say about this.

I think my answer (not yet written up) to why things aren’t worse has something to do with competitions on different time scales—if you have more than zero slack, you want to devote a small amount of your budget to R&D, and then you’ll win a long-run competition against a company that doesn’t do this. Integrate all the different possible timescales and this gets so confusing that maybe the result barely looks like competition at all. I’ve been having trouble writing this up and am interested in seeing if you’re thinking something similar. Again, really looking forward to reading more.
At the risk of being self-aggrandizing, I think the idea of axiology vs. morality vs. law is helpful here.
“Don’t be misleading” is an axiological commandment—it’s about how to make the world a better place, and what you should hypothetically be aiming for absent other considerations.
“Don’t tell lies” is a moral commandment. It’s about how to implement a pale shadow of the axiological commandment on a system run by duty and reputation, where you have to contend with stupid people, exploitative people, etc.
(so for example, I agree with you that the Rearden Metal paragraph is misleading and bad. But it sounds a lot like the speech I give patients who ask for the newest experimental medication. “It passed a few small FDA trials without any catastrophic side effects, but it’s pretty common that this happens and then people discover dangerous problems in the first year or two of postmarketing surveillance. So unless there’s some strong reason to think the new drug is better, it’s better to stick with the old one that’s been used for decades and is proven safe.” I know and you know that there’s a subtle difference here and the Institute is being bad while I’m being good, but any system that tries to implement reputation loss for the Institute at scale, implemented on a mob of dumb people, is pretty likely to hurt me also. So morality sticks to bright-line cases, at the expense of not being able to capture the full axiological imperative.)
This is part of what you mean when you say the report-drafting scientist is “not a bad person”—they’ve followed the letter of the moral law as best they can in a situation where there are lots of other considerations, and where they’re an ordinary person as opposed to a saint laser-focused on doing the right thing at any cost. This is the situation that morality (as opposed to axiology) is designed for, your judgment (“I guess they’re not a bad person”) is the judgment that morality encourages you to give, and this shows the system working as designed, ie meeting its own low standards.
And then the legal commandment is merely “don’t outright lie under oath or during formal police interrogations”—which (impressively) is probably *still* too strong, in that we all hear about the police being able to imprison basically whoever they want by noticing small lies committed by accident or under stress.
The “wizard’s oath” feels like an attempt to subject oneself to a stricter moral law than usual, while still falling far short of the demands of axiology.
EDIT: Want to talk to you further before I try to explain my understanding of your previous work on this, will rewrite this later.
The short version is I understand we disagree, I understand you have a sophisticated position, but I can’t figure out where we start differing and so I don’t know what to do other than vomit out my entire philosophy of language and hope that you’re able to point to the part you don’t like. I understand that may be condescending to you and I’m sorry.
I absolutely deny I am “motivatedly playing dumb” and I enter this into the record as further evidence that we shouldn’t redefine language to encode a claim that we are good at ferreting out other people’s secret motivations.
I say “strategic” because it is serving that strategic purpose in a debate, not as a statement of intent. This use is similar to discussion of, eg, an evolutionary strategy of short life histories, which doesn’t imply the short-life-history creature understands or intends anything it’s doing.
It sounds like normal usage might be our crux. Would you agree with this? Ie, that if most people in most situations would interpret my definition as normal usage and yours as a redefinition project, we should use mine, and vice versa for yours?
Sorry it’s taken this long for me to reply to this.

“Appeal to consequences” is only a fallacy in reasoning about factual states of the world. In most cases, appealing to consequences is the right action. For example, if you want to build a house on a cliff, and I say “you shouldn’t do that, it might fall down”, that’s an appeal to consequences, but it’s completely valid.

Or to give another example, suppose we are designing a programming language. You recommend, for whatever excellent logical reason, that all lines must end with a semicolon. I argue that many people will forget semicolons, and then their program will crash. Again, an appeal to consequences, but again it’s completely valid.

I think of language, following Eliezer’s definitions sequence, as being a human-made project intended to help people understand each other. It draws on the structure of reality, but has many free variables, so that the structure of reality doesn’t constrain it completely. This forces us to make decisions, and since these are not about factual states of the world (eg what the definition of “lie” REALLY is, in God’s dictionary) we have nothing to make those decisions on except consequences. If a certain definition will result in lots of people misunderstanding each other, bad people having an easier time confusing others, good communication failing to occur, or other bad things, then it’s fine to decide against it based on those grounds, just as you can decide against a programming language feature on the grounds that it will make programs written in it more likely to crash, or require more memory, etc.

I am not sure I get your point about the symmetry of strategic equivocation. I feel like this equivocation relies on using a definition contrary to its common connotations. So if I were allowed to redefine “murderer” to mean “someone who drinks Coke”, then I could equivocate between “Alice is a murderer (based on the definition where she drinks Coke)” and “murderers should be punished (based on the definition where they kill people)”, and combine them to get “Alice should be punished”. The problem isn’t that you can equivocate between any two definitions; the problem is very specifically when we use a definition counter to the way most people traditionally use it. I think (do you disagree?) that most people interpret “liar” to mean an intentional liar. As such, I’m not sure I understand the relevance of the Ruby’s coworkers example.

I think you’re making too hard a divide between the “Hobbesian dystopia” where people misuse language and a hypothetical utopia of good actors. I think of misusing language as a difficult thing to avoid, something all of us (including rationalists, and even including me) will probably do by accident pretty often. As you point out regarding deception, many people who equivocate aren’t doing so deliberately. Even in a great community of people who try to use language well, these problems are going to come up. And so just as in the programming language example, I would like to have a language that fails gracefully and doesn’t cause a disaster when a mistake gets made, one that works with my fallibility rather than naturally leading to disaster when anyone gets something wrong.

And I think I object-level disagree with you about the psychology of deception. I’m interpreting you (maybe unfairly, but then I can’t figure out what the fair interpretation is) as saying that people very rarely lie intentionally, or that this rarely matters.
This seems wrong to me—for example, guilty criminals who say they’re innocent seem to be lying, and there seem to be lots of these, and it’s a pretty socially important thing. I try pretty hard not to intentionally lie, but I can think of one time I failed (I’m not claiming I’ve only ever lied once in my life, just that this time comes to mind as something I remember and am particularly ashamed about). And even if lying never happened, I still think it would be worth having the word for it, the same way we have a word for “God” that atheists don’t just repurpose to mean “whoever the most powerful actor in their local environment is.”

Stepping back, we have two short words (“lie” and “not a lie”) to describe three states of the world (intentional deception, unintentional deception, complete honesty). I’m proposing to group these (1)(2,3), mostly on the grounds that this is how the average person uses the terms, and if we depart from how the average person uses the terms, we’re inviting a lot of confusion, both in terms of honest misunderstandings and malicious deliberate equivocation. I understand Jessica wants to group them (1,2)(3), but I still don’t feel like I really understand her reasoning except that she thinks unintentional deception is very bad. I agree it is very bad, but we already have the word “bias” and are so in agreement about its badness that we have a whole blog and community about overcoming it.
Maybe I’m misunderstanding you, but I’m not getting why having the ability to discuss involves actually discussing. Compare two ways to build a triskaidekaphobic calculator.

1. You build a normal calculator correctly, and at the end you add a line of code: IF ANSWER == 13, PRINT “ERROR: IT WOULD BE IMPOLITE OF ME TO DISCUSS THIS PARTICULAR QUESTION”.
2. You somehow invent a new form of mathematics that “naturally” never comes up with the number 13, and implement it so perfectly that a naive observer examining the calculator code would never be able to tell which number you were trying to avoid.

Imagine some people who were trying to take the cosines of various angles. If they used method (1), they would have no problem, since cosines are never 13. If they used method (2), it’s hard for me to imagine exactly how this would work, but probably they would have a lot of problems.

It sounds like the proposal you’re arguing against (and which I want to argue for)—not talking about taboo political issues on LW—is basically (1). We discuss whatever we want, we use logic which (we hope) would output the correct (taboo) answer on controversial questions, but if for some reason those questions come up (which they shouldn’t, because they’re pretty different from AI-related questions), we instead don’t talk about them. If for some reason they’re really relevant to some really important issue at some point, then we take the hit for that issue only, with lots of consultation first to make sure we’re not stuck in the Unilateralist’s Curse.

This seems like the right answer even in the metaphor—if people burned down calculator factories whenever any of their calculators displayed “13”, and the sorts of problems people used calculators for almost never involved 13, just have the calculator display an error message at that number.

(...plus doing other activism and waterline-raising work to deal with the fact that your society is insane, but that work isn’t going to look like having your calculators display 13 and dying when your factory burns down)
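To make method (1) concrete, here is a minimal sketch in Python (the function names and the `TABOO` constant are mine, invented purely for illustration): the underlying math is computed correctly, and the taboo lives in a single check bolted onto the output stage, so users taking cosines never notice it.

```python
import math

# Hypothetical constant, purely for illustration: the one answer society can't tolerate.
TABOO = 13

def compute(fn, *args):
    """Method (1): do the math correctly, with no taboo baked into the logic."""
    return fn(*args)

def polite_display(answer):
    """The taboo is a single check bolted on at the output stage."""
    if answer == TABOO:
        return "ERROR: IT WOULD BE IMPOLITE OF ME TO DISCUSS THIS PARTICULAR QUESTION"
    return str(answer)

# Cosines are never 13, so these users never hit the filter.
print(polite_display(compute(math.cos, 0.0)))             # -> "1.0"
# Only the rare query whose true answer is 13 gets the canned refusal.
print(polite_display(compute(lambda a, b: a + b, 6, 7)))  # -> the error message
```

Method (2), by contrast, would mean rebuilding `compute` itself around the taboo, which is exactly the kind of deep distortion the cosine users would end up paying for.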
This project (best read in the bolded link, not just in this post) seemed and still seems really valuable to me. My intuitions around “Might AI have discontinuous progress?” become a lot clearer once I see Katja framing them in terms of concrete questions like “How many past technologies had discontinuities equal to ten years of past progress?”. I understand AI Impacts is working on an updated version of this, which I’m looking forward to.
I was surprised that this post ever seemed surprising, which either means it wasn’t revolutionary, or was *very* revolutionary. Since it has 229 karma, it seems like it was the latter. I feel like the same post today would have been written with more explicit references to reinforcement learning, reward, addiction, and dopamine. The overall thesis seems to be that you can get a felt sense for these things, which would be surprising—isn’t it the same kind of reward-seeking all the way down, including on things that are genuinely valuable? Not sure how to model this.
It’s nice to see such an in-depth analysis of the CRT questions. I don’t really share drossbucket’s intuition—for me the 100 widget question feels counterintuitive the same way as the ball and bat question, but neither feels really aversive, so it was hard for me to appreciate the feelings that generated this post. But this gives a good example of an idea of “training mathematical intuitions” I hadn’t thought about before.
Many people pointed out that the real cost of a Bitcoin in 2011 or whenever wasn’t the couple of cents that it cost, but the several hours of work it would take to figure out how to purchase it. And that the expected returns needed to be discounted by the significant risk that a Bitcoin purchased in 2011 would be lost or hacked—or by the many hours of work it would have taken to ensure that didn’t happen. Also, that there was another hard problem of not selling your 2011-Bitcoins in 2014. I agree that all of these are problems with the original post, and that they significantly soften the parts that depend on “everyone should have bought lots of Bitcoins in 2011”. Obviously in retrospect this still would have been the right choice, but it makes it much harder to claim it was obvious at the time.
I still endorse most of this post, but https://docs.google.com/document/d/1cEBsj18Y4NnVx5Qdu43cKEHMaVBODTTyfHBa8GIRSec/edit has clarified many of these issues for me and helped quantify the ways that science is, indeed, slowing down.
I still generally endorse this post, though I agree with everyone else’s caveats that many arguments aren’t like this. The biggest change is that I feel like I have a slightly better understanding of “high-level generators of disagreement” now, as differences in priors, contexts, and categorizations—see my post “Mental Mountains” for more.