What I cannot do is predict that they are wrong, and wait for events to prove me right. There is no judgment day. No profit stream. No right. No wrong.
I’m aware of one counterexample to this: Salvator Mundi was thought to be a copy of a da Vinci original, was restored and authenticated as the original itself, and then increased in value enormously (because there are fewer than 20 known da Vincis). But the contemporary art market is very much not about that sort of archaeological discovery or restoration; it is instead about who the artist is and who they know.
The best advice is tailored to individuals, and the best explanations are targeted at avoiding or uninstalling specific confusions, instead of just pointing at the concept. But here I think the right call is giving evidence for ‘a’ reader instead of TurnTrout. So, a general case for rationality:
First, by rationality I mean a focus on cognitive process rather than a specific body of conclusions or thoughts. The Way is the art that pursues a goal, not my best guess at how to achieve that goal.
Why care about cognitive process? A few factors come to mind:
1) You’re stuck doing cognition, and you might want to do it better. Using your process to focus on your process can actually stabilize and improve things; see Where Recursive Justification Hits Bottom and The Lens that Sees Its Own Flaws.
2) Studying the creation of tools lets you know what tool to use when. Rather than reflexively bouncing from moment to moment, you can be deliberately doing things that you expect to help.
3) As a special case of 2: sometimes it’s important to get things right on the first try. This means we can’t rely on processes that require lots of samples (like empiricism), and instead have to figure out what’s going on in a way that lets us do the right thing; this often also involves figuring out what sorts of cognitive work would validly give us that hope.
4) Process benefits tend to accumulate. If I expend effort and acquire food for myself today, I will be in approximately the same position tomorrow; if I expend effort and establish a system that provides me with food, I will be in a different, hopefully better position tomorrow.
Who shouldn’t care about rationality? First, for any task where the correct strategy is either ‘obvious’ or ‘unintuitive but known to a tradition’, the benefits of thinking it through yourself are much lower. Second, to the extent that most rationality techniques we know route through “think about it,” the more expensive thinking is, the less useful the rationality techniques become.
Boarding houses used to be quite close to this, and I would love for the EA / Rationality community to have more of those. But they also fell out of favor for a reason (mostly legal, I think, but perhaps also increased wealth and housing stock). In particular, it seems like having the person be more of a house manager (who selects guests as they desire / ultimately owns the house) than a housekeeper (who is dependent on the goodwill of their fellow tenants) makes the system more sustainable / polishes some of the rough edges.
Homemakers are still around, though, and my sense is that when there’s a group house with something of this flavor, it’s because there’s a house affordable on one or two programmer salaries that is large enough for ~8 people, and so there’s a space for a spouse/boyfriend/girlfriend whose primary contribution is ‘being part of the family’ and ‘making the space nice.’ There it seems important that they’re part of the family instead of a Hufflepuff recruited from the hinterlands primarily to act as a servant.
[Note that it’s particularly weird to have a master-servant relationship coincide with a ‘community building’ role; if everyone likes Alice’s parties thrown at The House, and Alice is friends with everyone at The House, it’s a little weird for The House to fire Alice for not keeping the place as tidy as they like, because presumably that damages the friendships and the broader community fabric (since, say, Alice might not be a fan of The House anymore).]
When I was at Event Horizon, I was one of the people voting that we should spend more on the house manager, but also at the time about a third (maybe even half?) of residents of Event Horizon were living on runway, and so a 10% increase in rent would mean 9% less time to lift off. And with 10 people (the size of a more normal house), this just covers rent for the house manager; being able to pay them a somewhat reasonable salary looks more like a 20% or 30% increase in rent.
It’s had a huge range, from about a dozen to just over twenty.
This is the carpet, and these are the backjacks, both of which were found by Duncan.
How does the recursion bottom out? If real Hugh’s response to the question is to ask the machine, then perfectly simulated Hugh’s response must be the same. If real Hugh’s response is not to ask the machine, then the machine remains unused.
I think there are lots of strategies here that just fail to work. For example, if Hugh passes on the question with no modification, then you build an infinite tower that never does any work.
But there are strategies that do work. For example, whenever Hugh receives a question he can answer, he does so, and whenever he receives a question that is ‘too complicated’, he divides it into subquestions and consults HCH separately on each subquestion, using the results of the consultation to compute the overall answer. This looks like it will terminate, so long as the answers can flow back up the pyramid. Hugh could also pass along numbers about how subdivided a question has become, or the whole stack trace so far, in case there are problems that seem like they have cyclical dependencies (where I want to find out A, which depends on B, which depends on C, which depends on A, which depends on...). Hugh could pass back upwards results like “I didn’t know how to make progress on the subproblem you gave me.”
For example, you could imagine attempting to prove a mathematical conjecture. The first level has Hugh looking at the whole problem, and he thinks “I don’t know how to solve this, but I would know how to solve it if I had lemmas like A, B, and C.” So he asks HCH to separately solve A, B, and C. This spins up a copy of Hugh looking at A, who also thinks “I don’t know how to solve this, but I would if I had lemmas like Aa, Ab, and Ac.” This spins up a copy of Hugh looking at Aa, who thinks “oh, this is solvable like so; here’s a proof of Aa.” Hugh_A is now looking at the proofs, disproofs, and indeterminate results for Aa, Ab, and Ac, and can now either write their conclusion about A or spin up new subagents to examine new subparts of the problem.
Note that in this formulation, you primarily have communication up and down the pyramid, and the communication is normally at the creation and destruction of subagents. It could end up that you prove the same lemma thousands of times across the branches of the tree, because it turned out to be useful in many different places.
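To make the terminating strategy above concrete, here is a minimal runnable sketch. The toy domain (a question is either an integer, which Hugh can answer directly, or a tuple of subquestions whose answers get summed) and the depth cutoff are my own illustrative assumptions; real HCH consults a human at every node rather than doing a type check.

```python
# A minimal sketch of the divide-and-conquer HCH strategy described above.
# Toy domain: a question is an int (answerable directly; the answer is the
# number itself) or a tuple of subquestions (the answer is the sum of the
# subanswers). The point is the recursion structure, not the domain.
from typing import Union

Question = Union[int, tuple]
FAILURE = "I didn't know how to make progress on the subproblem you gave me."
MAX_DEPTH = 10  # the 'how subdivided has this become' counter, used as a cutoff

def hch(question: Question, depth: int = 0) -> Union[int, str]:
    """One node of the pyramid: a simulated Hugh answering one question."""
    if isinstance(question, int):
        return question  # a question Hugh can answer himself
    if depth >= MAX_DEPTH:
        return FAILURE  # pass failure back up instead of recursing forever
    # Too complicated: divide into subquestions and consult HCH on each.
    subanswers = [hch(sub, depth + 1) for sub in question]
    if any(answer == FAILURE for answer in subanswers):
        return FAILURE
    # Answers flow back up the pyramid and combine into the overall answer.
    return sum(subanswers)

# The lemma example: the top question needs A (itself needing Aa, Ab, Ac), B, C.
print(hch(((1, 2, 3), 4, 5)))  # -> 15
```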
My read of this thread is that your (Andaro’s) original comment pointed at a particular subset of relationships, which are ‘bad’ but seem better than the alternatives to the person inside them, where the reason to trust the judgment of the person inside them is that the right to exit means they will leave relationships that are worse than their alternatives. Paperclip Maximizer then pointed out that a major class of reasons people stay in abusive relationships is that their alternatives are manipulated by the abuser, either through explicit or implicit threats, or through attacks directed at the epistemology (such that the alternatives are difficult to imagine or correctly weigh).
I understood Paperclip Maximizer’s point to be that there’s a disconnect between the sort of relationships you describe in the ancestral comment and what a ‘typical’ abusive relationship might look like; it might be highly difficult to determine whether the “right to exit” is being denied or not. (For example, in #12, the primary factor preventing exit is the pride of the person stuck in the relationship. Is that their partner blocking the exercise of the right?) If this disconnect exists as a tradeoff, such that the more a relationship involves reducing the right to exit, the more we suspect that relationship is (or could be) abusive, then the original comment doesn’t seem germane; interpreted such that it’s true, it’s irrelevant, and interpreted such that it’s relevant, it’s untrue.
PSA: His name is spelled “Eliezer.”
Suppose that different humans have different selection criteria when deciding to share a meme. …
Nowadays, memes can specialize to focus onto tiny subsets of the population.
One difference between ‘the past’ and ‘the present’ that Eliezer doesn’t mention, but which is relevant to the question of selection effects, is to what extent memes are spread by ‘thought leaders’ (who are typically optimizing for multiple things, and have some sense of responsibility) and to what extent memes are spread ‘peer-to-peer.’ Whether this improves or degrades selection on the relevant criteria obviously depends on the incentives involved, but with ‘general reasonableness’ it’s easy to see how a pundit is incentivized to appeal to other pundits (of all camps) whereas a footsoldier is incentivized to appeal to other footsoldiers. (One common point among the base of both left and right appears to be distrust of the party elite, which is often seen as being too willing to cooperate with the other side—imagine how they might react to the party elite of a century ago, before the increased polarization!)
And so, if more and more of the political conversation becomes “signalling on Facebook pages” instead of “editorials in the national paper of record”, it’s easy to see how reasonableness could be modeled less, and thus adopted less.
I think it’s possible (and important) to analyze this phenomenon and see what’s going on. But the point is that this will involve analyzing a phenomenon—ie truth-seeking, ie epistemic rationality, ie the thing we’re good at and which is our comparative advantage—and not winning immediately.
I mostly agree with this, but want to point at something that your comment didn’t really cover: “whether to go to the homeopath or the doctor” is a question that I expect epistemic rationality to be helpful for. (This is, in large part, a question that Inadequate Equilibria was pointed towards.) [This is sort of the fundamental question of self-help, once you’ve separated it into “what advice should I follow?” and “what advice is out there?”]
But this requires that the question of how to evaluate strategies be framed more in terms of “I used my judgment to weigh evidence” and less in terms of “I followed the prestige” or “I compared the lengths of their articulated justifications” or similar things. A layperson in 1820 who is using the latter will wrongly pick the doctors, and a confused layperson in 2000 will wrongly pick homeopathy, but ideally a rationalist would switch from homeopathy to doctors as the actual facts on the ground change.
This doesn’t mean a rationalist in 1820 should be satisfied with homeopathy; it should be known to them as a temporary plug to a hole in their map. But that also doesn’t mean it’s the most interesting or important hole in their map; probably then they’d be most interested in what’s up with electricity. [Similarly, today I’m somewhat confused about what’s going on with diet, and have some ‘reasonable’ guesses and some ‘woo’ guesses, but it’s clearly not the most interesting hole in my map.]
And so my sense is that a rationalist in 2018 should know what they know, and what they don’t, and be scientific about things to the degree that they capture their curiosity (which relates both to ‘irregularities in the map’ and to ‘practical usefulness’). Which is basically how I read your comment, except that you seem more worried about particular things than I am.
This seems broadly right to me, but it seems to me like metaheuristics (in the numerical optimization sense) are practical and have a structure like the one that you’re describing. Neural architecture search is the name people are using for this sort of thing in contemporary ML.
What’s different between them and the sort of thing you describe? Well, for one the softening is even stronger; rather than a performance-weighted average across all strategies, it’s a performance-weighted sampling strategy that has access to all strategies (but will only actually evaluate a small subset of them). But the core strategy (doing object-level cognition while also doing meta-level cognition about how you’re doing the object-level cognition) seems basically the same.
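To gesture at what that looks like, here is a minimal Python sketch of performance-weighted sampling over strategies. The function name, the softmax weighting, and the running-mean update are my own illustrative choices, not any specific published metaheuristic:

```python
# A sketch of performance-weighted sampling: rather than averaging all
# strategies' outputs, sample which strategy to evaluate next in proportion
# to (an increasing function of) its track record so far.
import math
import random

def weighted_strategy_search(strategies, evaluate, n_trials=100, temperature=1.0):
    """strategies: candidate strategies; evaluate: strategy -> noisy score."""
    scores = [0.0] * len(strategies)  # running mean performance per strategy
    counts = [0] * len(strategies)
    for _ in range(n_trials):
        # Softmax over current scores: better strategies get sampled more
        # often, but every strategy keeps some probability of evaluation.
        weights = [math.exp(s / temperature) for s in scores]
        i = random.choices(range(len(strategies)), weights=weights)[0]
        result = evaluate(strategies[i])
        counts[i] += 1
        scores[i] += (result - scores[i]) / counts[i]  # update running mean
    best = max(range(len(strategies)), key=lambda j: scores[j])
    return strategies[best], scores[best]

# Toy usage: a "strategy" is just a number, scored by closeness to 0.7.
best, score = weighted_strategy_search(
    [0.1, 0.5, 0.7, 1.0],
    evaluate=lambda s: 1.0 - abs(s - 0.7) + random.gauss(0, 0.05),
)
```

Note that only a small fraction of the possible (strategy, trial) evaluations actually happen, which is the sense in which this is softer than a full performance-weighted average.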
It remains unclear to me whether the right way to find these meta-strategies is something like “start at the impractical ideal and rescue what you can” or “start with something that works and build new features”; it seems like modern computational Bayesian methods look more like the former than the latter. When I think about how to describe human epistemology, it seems like computationally bounded Bayes is a promising approach (where probabilities change both by the standard updates among hypotheses that already exist, and by operations, yet to be formalized, that add or remove hypotheses; you want to be able to capture “Why didn’t you assign high probability to X?” “Because I didn’t think of it; now that I have, I do.”). But of course I’m using my judgment that already works to consider adding new features here, rather than having built how to think out of rescuing what I can from the impractical ideal of how to think.
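As a toy formalization of that bounded-Bayes picture (entirely my own sketch, not an established algorithm): exact updating among the hypotheses currently entertained, plus explicit operations, standing in for the not-yet-formalized ones, that add or remove hypotheses.

```python
# Toy computationally bounded Bayes: standard updates over a small, mutable
# hypothesis pool, plus add/remove operations for hypotheses one thinks of
# (or prunes) along the way.

class BoundedBayes:
    def __init__(self, hypotheses):
        # hypotheses: dict of name -> likelihood function, P(evidence | h)
        self.hypotheses = dict(hypotheses)
        n = len(self.hypotheses)
        self.probs = {h: 1.0 / n for h in self.hypotheses}

    def _normalize(self):
        total = sum(self.probs.values())
        for h in self.probs:
            self.probs[h] /= total

    def update(self, evidence):
        """Standard Bayesian updating, but only among hypotheses that exist."""
        for h, likelihood in self.hypotheses.items():
            self.probs[h] *= likelihood(evidence)
        self._normalize()

    def add_hypothesis(self, name, likelihood, initial_prob=0.1):
        """'Because I didn't think of it; now that I have, I do.' Mass for
        the new hypothesis is taken proportionally from the existing ones."""
        for h in self.probs:
            self.probs[h] *= 1 - initial_prob
        self.hypotheses[name] = likelihood
        self.probs[name] = initial_prob

    def remove_hypothesis(self, name):
        """Prune a negligible hypothesis to free up computation."""
        del self.hypotheses[name]
        del self.probs[name]
        self._normalize()

# Toy usage: is a coin fair or heads-biased? Mid-stream, someone suggests
# "two-headed," which wasn't in the pool and so had no probability at all.
model = BoundedBayes({
    "fair": lambda flip: 0.5,
    "biased": lambda flip: 0.8 if flip == "H" else 0.2,
})
model.update("H")
model.add_hypothesis("two-headed", lambda flip: 1.0 if flip == "H" else 0.0)
model.update("H")
```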
although typically rewards in RL depend only on states,
Presumably this should be a period? (Or perhaps there’s a clause missing pointing out the distinction between caring about history and caring about states, though you could transform one into the other?)
I’m glad you finally got around to watching it! I stopped watching new episodes as they were coming out around season 6, but would still catch up occasionally until about midway through season 7, where I’ve been stuck for a while. This seems like as good an impetus as any to make it up to the end of season 8.
One thing worth mentioning about fanfiction (which I originally read from Bad Horse, but can no longer find the original source of) is that ponyfic has a benefit over other source materials: you can write basically any story in the Equestria universe, enabling fanfic ‘about people’ rather than, say, ‘about wizards’ or ‘about vampires’ or ‘about ninjas’ and so on. I could more easily find his claim that fimfiction is just better than other fanfiction sites, from a UI perspective.
They don’t think it’s pretty likely to succeed…
How worthwhile something is to do depends on the product of its chance of success and its payoff, but it’s not clear that anticipations of goodness scale as much as consequences of goodness do, which could lead to predictably unmotivating plans (which ‘should be’ motivating).
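A made-up numerical illustration: expected value multiplies probability and payoff straight through, but if felt anticipation tracks the success probability (or something sublinear in the payoff) instead, the higher-EV plan here will feel much less motivating.

```latex
\[
\underbrace{0.01 \times 10^{6}}_{\mathrm{EV} \,=\, 10{,}000}
\;>\;
\underbrace{0.5 \times 10^{3}}_{\mathrm{EV} \,=\, 500},
\qquad \text{yet } 0.01 \ll 0.5 .
\]
```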
This is what I mean when I say that the presentation of Double Crux is logical, instead of probabilistic. The version of double crux that I use is generally probabilistic, and I claim it is an obvious modification of the logical version.
if almost everybody I interact with is an atheist, and therefore I don’t feel the need to convince them of atheism, does that mean that I believe atheism is unproductive?
I note an important distinction between “don’t feel the need to preach to the choir” and “don’t feel the need to hold people accountable for X”. It’s one thing if I’m operating in a high-trust environment where people almost never steal from each other, and so policies that reduce the risk of theft, or agitation against theft, seem like a waste of time; it’s another thing if I should shrug off thefts when I witness them because thefts are pretty rare, all things considered.
By analogy, it seems pretty important whether theism is in the same category as ‘food preferences’ (where I would hassle Alice if Alice hassled Bob over Bob liking the taste of seaweed) or as ‘theft’ (where I would hassle Alice over not hassling Bob over Bob stealing things). (Tolerate tolerance, coordinate meanness, etc.)
[Edit: to be clear, I don’t think theism is obviously in the same category as stealing, but I think it is clearly an intellectual mistake, and I have a difficult time trusting the thinking of someone who is a theist for reasons that aren’t explicitly social. When deciding how much to tolerate theism, one of the considerations is something like “what level of toleration leads to the lowest number of theists in the long run, or flips my view on atheism?”]
I think I basically agree with your position, when it comes to ‘whether the Sequences are useful’ (yes) and ‘the population as a whole’ (not reading them enough). Most of the people I’m thinking of as counterexamples are exceptional people, experts in something related, who are closely affiliated with core members of the community, such that they could have had their visible rough edges polished off over significant interactions over the course of the last few years. But even then, I expect them to benefit from reading the Sequences, because it might expose invisible rough edges. Maybe there’s an experiment here, around paying people in that reference class (who seem unexposed, but not underexposed) to read more of the Sequences, to determine whether or not they were actually underexposed?
Then there is the fact that if you’re a member of a church, you are treated to regular sermons, in which someone who knows far more of the Bible than you do, and whose job, in fact, it is to know all about what’s in the Bible, what’s important, etc., tells you all about these things, explains them to you, explains what to do with that information, etc.
Yeah, I think things like this would be quite good, and also possibly quite popular. Maybe this should even be a rationalist podcast, or something? I also note it feels closer to Slate Star Codex posts to me than Sequence posts, for some reason; it feels like Scott is often looking at something object-level and saying “and this connects to consideration X from the Sequences” in a way that makes it feel like a sermon on X with a particular example.
I know the priming studies got eviscerated, but the last time I looked into this I couldn’t exactly find an easy list of “famous psychology studies that didn’t replicate” to compare against.
My understanding is that even this story is more complicated; Lauren Lee summarizes it on Facebook as follows:
OK, the Wikipedia article on priming mostly refers to effects of the first kind (faster processing on lexical decision tasks and such) and not the second kind (different decision-making or improved performance in general).
So uh. To me it just looks like psych researchers overloaded the term ‘priming’ with a bunch of out-there hypotheses like “if the clipboard you’re holding is heavy, you are more likely to ‘feel the significance’ of the job candidate.” I mean REALLY, guys. REALLY.
Priming has been polluted, and this is a shame.
I would not be surprised if most of the references in the Sequences are to old-school definitions of various terms that are more likely to survive, which complicates the research task quite a bit.
The atheism/secularism that permeates the Sequences and which was an explicit assumption and policy of Less Wrong 7–9 years ago would get you heavily downvoted and censured here, and lambasted and possibly banned on SSC.
I am surprised by this claim, and would be interested in seeing examples. In 2014 (closer to 7 years ago than today), Scott wrote this:
Were we ever this stupid? Certainly I got in fights about “can you still be an atheist rather than an agnostic if you’re not sure that God doesn’t exist,” and although I took the correct side (yes, you can), it didn’t seem like oh my god you are such an idiot for even considering this an open question HOW DO YOU BELIEVE ANYTHING AT ALL WITH THAT MINDSET.
Now, that’s about a different question than “is God real or not?” (in the comments, Scott mentions Leah and the ~7% of rationalists who are theists).
In the R:AZ preface, Eliezer writes this:
My fifth huge mistake was that I—as I saw it—tried to speak plainly about the stupidity of what appeared to me to be stupid ideas. I did try to avoid the fallacy known as Bulverism, which is where you open your discussion by talking about how stupid people are for believing something; I would always discuss the issue first, and only afterwards say, “And so this is stupid.” But in 2009 it was an open question in my mind whether it might be important to have some people around who expressed contempt for homeopathy. I thought, and still do think, that there is an unfortunate problem wherein treating ideas courteously is processed by many people on some level as “Nothing bad will happen to me if I say I believe this; I won’t lose status if I say I believe in homeopathy,” and that derisive laughter by comedians can help people wake up from the dream.
Today I would write more courteously, I think. The discourtesy did serve a function, and I think there were people who were helped by reading it; but I now take more seriously the risk of building communities where the normal and expected reaction to low-status outsider views is open mockery and contempt.
Despite my mistake, I am happy to say that my readership has so far been amazingly good about not using my rhetoric as an excuse to bully or belittle others. (I want to single out Scott Alexander in particular here, who is a nicer person than I am and an increasingly amazing writer on these topics, and may deserve part of the credit for making the culture of Less Wrong a healthy one.)
In 2017, Scott writes How Did New Atheism Fail So Miserably?, in a way that signals that Scott is not a New Atheist and is mostly annoyed by them, but is confused by why the broader culture is so annoyed by them. But the sense that someone who is all fired up about God not being real would be ‘boring at parties’ is the sense that I get from Scott’s 2017 post, and the sense that I get from reading Scott in 2014, or what I remember from LW in 2012. Which is quite different from “would get you banned” or religion being a protected class.
Like, when I investigate my own views, it seems to me like spending attention criticizing supernaturalist religions is unproductive because 1) materialist reductionism is a more interesting and more helpful positive claim that destroys supernaturalist religion ‘on its own’, and 2) materialist religions seem like quite useful tools that maybe we should be actively building, and allergies to supernaturalist religions seem unhelpful in that regard. This doesn’t feel like abandoning the perspective and adopting the opposite, except for that bit where the atheist has allergies to the things that I want to do and I think those allergies are misplaced, so I can see how it might feel that way to them.
I discussed a few of the points here with some people at the MIRI lunch table, and Scott Garrabrant pointed out “hey, I loudly abandoned Bayesianism!”. That is, we always knew that ideal Bayesianism required infinite computation (you don’t just consider a few hypotheses, but all possible hypotheses) and wouldn’t work for embedded agents, and as MIRI became more interested in embedded agency they started developing theories of how that works. There was some discussion of how much this aligned with various people’s claims that the quantitative side of Bayes wasn’t all that practical for humans (with, I think, the end result being seeing them as similar).
For example, in August 2013 there was some discussion of Chapman’s Pop Bayesianism, where I said:
I think the actual quantitative use of Bayes is not that important for most people. I think the qualitative use of Bayes is very important, but is hard to discuss, and I don’t think anyone on LW has found a really good way to do that yet. (CFAR is trying, but hasn’t come up with anything spectacular so far to the best of my knowledge.)
Then Scott Alexander responded, identifying Bayesianism in contrast to other epistemologies, and I identified some qualitative things I learned from Bayes, as did Tyrrell McAllister.
How does this hold up, five years later?
I still think Bayesianism as a synthesis of Aristotelianism and Anton-Wilsonism is superior to both; I think the operation underlying Bayesianism for embedded agents is not Bayesian updating, but rather something that approaches Bayesian updating in the limit, and that one of the current areas of progress in rationality is grappling with what’s actually going on there. (Basically, this is because of the standard Critical Rationalist critique of Bayesianism: the Bayesian view says the equivalent of “well, you just throw infinite compute at the problem to consider the superset of all possible answers, and then you’re good,” which is not useful advice to current practicing scientists. But the CR answer doesn’t appear to be good enough either.)
I think basically the same things I did then—that the actual quantitative use of Bayes is not that important for most people, and that CFAR’s techniques for talking about the qualitative use of Bayes mostly don’t refer to Bayes directly. I don’t think this state of affairs represents a ‘school without practitioners’ / I still disagree with The Last Rationalist’s assessment of things, but perhaps I’m missing what TLR is trying to point at.