Good post; you should put it in Main. It’s nice to see more stuff here on some of the aspects of instrumental rationality from HPMoR. Similarly, the two linked posts on your blog are brilliant; I’d been vaguely intending to write about them myself at some point, but you might have saved me the work.
KnaveOfAllTrades
Yes; apology is an underrated consequentialist tool among nerds.
Some of the social function of apology can be understood game-theoretically: an apology explicitly disavows a past action, allowing the recipient of the apology to leverage that confession in future. If someone apologises for something and then does it again, the response can escalate, because we now have evidence that they are doing it even knowing that it's 'wrong'. The person who apologised knows this, and the implicit threat of escalation often checks their future behaviour. Apology is therefore (possibly among other things) a costly signal, where the cost of apologising is greater susceptibility to escalation in future cases.
Apology falls into a class—along with other things such as forgiving misdeeds, forgetting misdeeds, retribution, punishing an agent against its will, compensation for misdeeds—of things that would make no sense among sufficiently advanced and cooperative rationalists. Some things in that class (e.g. forgiveness) might already have been transcended by LW, and others (e.g. apology) are probably not possible to transcend even on LW, because the knowledge of other participants (e.g. confidence of their cooperativeness) required to transcend apology is probably too high for an online community of this size.
I would guess that the Bay Area rationalist set and its associates—which as far as I can tell is by far the most advanced community in the world in terms of how consummately instrumental x-rationality is forged into their swords—apologizes way, way, way more than the average LW’er, just like they talk about/express their feelings way more than people on LW typically do, and win because they’re willing to confront that prospect of ‘being vulnerable’.
“Well,” said the boy. His eyes had not wavered from the Defense Professor’s. “I certainly regret hurting you, Professor. But I do not think the situation calls for me to submit to you. I never really did understand the concept of apology, still less as it applies to a situation like this; if you have my regrets, but not my submission, does that count as saying sorry?”
Again that cold, cold laugh, darker than the void between the stars.
“I wouldn’t know,” said the Defense Professor, “I, too, never understood the concept of apology. That ploy would be futile between us, it seems, with both of us knowing it for a lie. Let us speak no more of it, then. Debts will be settled between us in time.”
Two mistakes in thinking that my past self made a lot and others might also:
(1) Refusing to apologize if the other party was 'more wrong'. Even if you're 99.9% right/innocent/blameless, you still have to choose between apologizing and not apologizing to the other person. If you refuse to apologize, things will probably get worse, because the other person thinks you're more wrong than you think you are, and they will read your not apologizing as defecting. If you apologize in a smart way, you can give an apology (which shouldn't make a difference, but which in practice makes the other person more likely to apologise as well) without tying yourself down with too broad a commitment about your future behaviour, and without lying that something was a mistake when it wasn't.
(2) Using the fact that, in the limit as rationality and cooperation become arbitrarily great, apology is meaningless, as a rationalization for not apologising, when in fact you just feel embarrassed or are generally untrained (and therefore not fit enough to apologise), and are avoiding the exertion of doing so.
I want to point out the difference between completely fake apologies for things one does not think were mistakes, and apologising for things that were mistakes even if the other person's mistakes were much greater. The former is less often the smart thing to do; the latter is smart a lot more often than one might think. Once you get fairly strong, you can sometimes even win free points by apologising in front of a big group of people for something that everyone but the other disputant thinks is completely outweighed by the other disputant's actions.
E.g. ‘I’m sorry I used such an abrupt tone in asking you to desist from stealing my food; it probably put you on the defensive.’ If you really mean it (and you should, because you’re almost certainly not a perfect communicator and there were probably things you could have done better), then often onlookers will think you’re awesome and think the other person sucks for ‘making you’ apologise when you’d ‘done nothing wrong’. Sometimes even the other disputant will be so disarmed by your unwavering ‘politeness’ that they will realise the ridiculousness of the situation and realise that you’re being genuine and that they made a mistake, whereas when they thought you were a hostile opponent, it was much easier for them to rationalise that mistake.
Notice that in that example, your apology has not even constrained your future actions: everyone was so distracted by the ridiculousness of you apologising when you were innocent, and by the contrast it drew between you and your opponent, that nobody will think to escalate against you the next time somebody steals your food.
That’s why it’s so important to know how to lose—so that you can win! Just like how the best things you could do to decrease your personal risk from fights are things like practising conflict defusion techniques, learning how to walk away from conflict, being less tempestuous, being situationally aware, or even just learning how to play dead/fake a seizure/panic attack, rather than something that just looks like winning, like practising flashy kicks.
Did the whole thing. Cheers to all involved. :)
Donated $50.
Thanks to Luke for his exceptional stewardship during his tenure! You’ll be awesome at GiveWell!
And Nate you’re amazing for taking a level and stepping up to the plate in such a short period of time. It always sounded to me like Luke’s shoes would be hard for a successor to fill, but seeing him hand over to you I mysteriously find that worry is distinctly absent! :)
I always enjoy your contributions and it makes me sad that you use the site less than you would because of a/the downvote stalker.* I’m interested in applying microeconomics to these sorts of domestic or between-friends situations in which it might conventionally be taboo to do so, and this type of bidding in particular is something I’ve wondered about, so thanks for posting this!
*(I am not sure how easy it would be for an admin to look at the database and figure out what to do once someone's been identified as a downvote stalker, but given how destructive the behaviour is, I'm skeptical that it wouldn't be worthwhile at least trying to sort it out, and at the very least I'd like to know why it seems nothing's been done. (Not to assign blame, but to try to figure out how we can get faster responses on such issues in future.) If there really is one or a small number of people behind most of the downvote stalking, then checking the downvotes against victims should take under 15 minutes. I'd put >80% probability on this turning up one or more stalkers even with just a naive database query using the set of victims in this thread.)
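To make concrete what such a naive check might look like, here is a minimal sketch in Python (the language of the LW codebase). The vote-log shape of (voter, target, direction) rows is hypothetical, as are the thresholds; the real database schema will differ, but the aggregation is the same idea you'd run as a SQL query against the actual vote table.

```python
from collections import Counter

def find_downvote_stalkers(votes, victims, min_downvotes=10, min_victim_fraction=0.8):
    """Flag voters whose downvotes are overwhelmingly concentrated on known victims.

    votes: iterable of (voter, target, direction) rows, direction +1 or -1.
    victims: set of usernames reported as downvote-stalking victims.
    Returns the list of suspected voter names.
    """
    total_downvotes = Counter()    # downvotes cast, per voter
    victim_downvotes = Counter()   # downvotes that hit a victim, per voter
    for voter, target, direction in votes:
        if direction == -1:
            total_downvotes[voter] += 1
            if target in victims:
                victim_downvotes[voter] += 1
    return [
        voter for voter, total in total_downvotes.items()
        if total >= min_downvotes
        and victim_downvotes[voter] / total >= min_victim_fraction
    ]
```

An ordinary downvoter spreads their downvotes across many targets and stays under the victim-fraction threshold; an account that exists mostly to downvote the victim set stands out immediately.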
I sometimes find that telling my Inner Lazy that it can decide (after I've done the first one) whether to continue a series of tasks or to stop and be Lazy gets me to do the whole series of tasks. Despite having noticed explicitly that in practice this 'decision delay strategy' leads to the whole series getting done, it still works; it rather seems like tricking my Inner Lazy into handing the reins over to my Inner Agent.
I am really very pleasantly surprised with how this comment tree turned out and these are useful warnings. The level of internal insight was higher than I would have expected even if our first two comments hadn’t been vaguely confrontational. Thank you!
Yes. If we change "We shouldn't eat chickens because they are conscious" to "We shouldn't eat chickens because they want to not be eaten," then this becomes another example where, once we cashed out what was meant, the term 'consciousness' could be circumvented entirely and replaced with a less philosophically murky concept. In this particular case, how clear the concept of 'wanting' (as it relates to chickens) is might be disputed, but it seems clearly a lesser mystery than the monolith of 'consciousness'.
Hi, jackal_esq. As someone involved in criminal justice, you might find the following interesting, if you haven’t seen them already:
Evidence under Bayes theorem, Wikipedia
R v Adams, Wikipedia
Sally Clark, Wikipedia
Amanda Knox case, Less Wrong (followup post linked at bottom)
A formula for justice, Guardian
Bayesian analysis under threat in British courts, Less Wrong

Aside from that, welcome to Less Wrong!
Begin here and read up to part 5 inclusive. On the margin, getting a basic day-in, day-out wardrobe of nice, well-fitting jeans/chinos (maybe chino or cargo shorts if you live in a hot place) and t-shirts is far more valuable when you start approaching fashion than hats. Hats are a flair that comes after everything else in the outfit you're wearing them with. Maybe you want to just spend a few hours one-off choosing a hat and don't want to think about all the precursors. But that can actually make you backslide. If you look at their advice about hats, you'll see that pork pies and fedoras are recommended, but it's well known how badly a fedora can backfire if you aren't very careful.
(For example, I'm still in the 'trying new t-shirts/shirts/jeans/chinos/shoes with an occasional jumper purchase' phase after about a year to 18 months. Still haven't even got to shorts. You might progress faster if you shop more often or have a higher shopping budget. But suffice to say hats are a long way in.)
There is a known phenomenon of guys walking around in a fedora or other brimmed hat with a poorly coordinated outfit, dirty clothes, odour, bad fit, etc.; basically, going intermediate without having the basics down. In these cases you will lose points with a lot of people, because they will cringe or think you're trying to compensate. You may or may not have been engaging in similar thinking when making this thread, but watch out for that failure mode.
Supplementary reading and good to get a yay or nay before buying something, or to get recommendations within a type of garment: /r/malefashionadvice/
Fashionability and going for safety helmets/caps might be divergent strategies though. If you were purely optimizing the former, what I say above might be relevant. If the latter, just getting some Crasches and calling it a day might be enough.
This premise sounds interesting, but I feel like concrete examples would really help me be sure I understand it.
Yep, I find the world a much less confusing place since I learned countries' capitals and locations on the map. I had (and to some extent still do have) a mental block on geography which was ameliorated by it.
Rundown of positive and negative results:
In a similar but lesser way, I found learning English counties (and to an even lesser extent, Scottish counties) made UK geography a bit less intimidating. I used this deck because it’s the only one on the Anki website I found that worked on my old-ass phone; it has a few howlers and throws some cities in there to fuck with you, but I learned to love it.
I suspect that learning the dates of monarchs and Prime Ministers (e.g. of England/UK) would have a similar benefit in contextualising and de-intimidating historical facts, but I never finished those decks and haven’t touched them in a while, so never reached the critical mass of knowledge that allowed me to have a good handle on periods of British history. I found it pretty difficult to (for example) keep track of six different Georges and map each to dates, so slow progress put me off. Let me know if you’re interested and want to set up a pact, e.g. ‘We’ll both do at least ten cards from each deck a day and report back to the other regularly’ or something. In fact that offer probably stands for any readers.
I installed some decks for learning definitions in areas of math that I didn't know, but found memorising decontextualised definitions hard enough that I wasn't motivated to do it, given everything else I was doing and Anki-ing at the time. I still think repeat exposure to definitions might be a useful developmental strategy for math that nobody seems to be using deliberately and systematically, but I'm not sure Anki is the right way to do it, or, if it is, that shooting so far ahead of my current knowledge was the best way to go about it. The same went for a LaTeX deck I got despite having pretty much never used LaTeX, without practising it while learning the deck.
Canadian provinces/territories I have not yet found useful beyond feeling good for ticking off learning the deck, which was enough for me since I did them in a session or two.
Languages Spoken in Each Country of the World (I was trying to do not just country-->languages but country-->languages with proportions of population speaking the languages) was so difficult and unrewarding in the short term that I lost motivation extremely quickly (this was months ago). The mental association between 'Berber' and 'North Africa' has come up a surprising number of times, though. Most recently last night.
Periodic table (symbol<-->name, name<-->number) took lots of time and hasn't been very useful for me personally (I pretty much just learned it in preparation for a quiz). Learning just which elements are in which groups/sections of the periodic table might be more useful and a lot quicker (since by far the main difficulty was name<-->number).
I relatively often find myself wanting demographic and economic data, e.g. populations of countries, populations of major world cities, populations of UK places, GDPs. Ideally I'd do this not just for major places, since I want to get a good intuitive sense of these figures all the way from very large or major places down to tiny ones.
Similarly if one has a hobby horse it could be useful. Examples off the top of my head (not necessarily my hobby horse): Memorising the results from the LessWrong surveys. Memorising the results from the PhilPapers survey. Memorising data about resource costs of meat production vs. other food production. Memorising failed AGI timeline predictions. Etc.
I found that starting to learn Booker Prize winners on Memrise has given me a few 'Ah, I recognise that name, and literature seems less opaque to me, yay!' moments, but there are probably higher-priority decks for you to learn unless that's more your area.
Introduction

I suspected that the type of stuff that gets posted in Rationality Quotes reinforces the mistaken way of throwing about the word 'rational'. To test this, I set out to look at the first twenty rationality quotes in the most recent RQ thread. In the end I only looked at the first ten, because it was taking more time and energy than I could spare. (I'd only seen one of them before, namely the one that prompted me to make this comment.)
A look at the quotes
In our large, anonymous society, it’s easy to forget moral and reputational pressures and concentrate on legal pressure and security systems. This is a mistake; even though our informal social pressures fade into the background, they’re still responsible for most of the cooperation in society.
There might be an intended, implicit lesson here that would systematically improve thinking, but without more concrete examples and elaboration (I’m not sure what the exact mistake being pointed to is), we’re left guessing what it might be. In cases like this where it’s not clear, it’s best to point out explicitly what the general habit of thought (cognitive algorithm) is that should be corrected, and how one should correct it, rather than to point in the vague direction of something highly specific going wrong.
As the world becomes more addictive, the two senses in which one can live a normal life will be driven ever further apart. One sense of “normal” is statistically normal: what everyone else does. The other is the sense we mean when we talk about the normal operating range of a piece of machinery: what works best.
These two senses are already quite far apart. Already someone trying to live well would seem eccentrically abstemious in most of the US. That phenomenon is only going to become more pronounced. You can probably take it as a rule of thumb from now on that if people don’t think you’re weird, you’re living badly.
Without context, I’m struggling to understand the meaning of this quote, too. The Paul Graham article it appears in, after a quick skim, does not appear to be teaching a general lesson about how to think; rather it appears to be making a specific observation. I don’t feel like I’ve learned about a bad cognitive habit I had by reading this, or been taught a new useful way to think.
If you’re expecting the world to be fair with you because you are fair, you are fooling yourself. That’s like expecting a lion not to eat you because you didn’t eat him.
Although this again seems vague enough that the range of possible interpretations is fairly broad, I feel like it is interpretable into useful advice. It doesn't make a clear point about habits of thought, though, and I had to consciously make up a plausible general lesson for it (the just-world fallacy), which I probably wouldn't have been able to think up if I didn't already know that lesson.
He says we could learn a lot from primitive tribes. But they could learn a lot more from us!
I understand and like this quote. It feels like this quote is an antidote to a specific type of thought (patronising signalling of reverence for the wisdom of primitive tribes), and maybe more generally serves as an encouragement to revisit some of our cultural relativism/self-flagellation. But probably not very generalisable. (I note with amusement how unconvincing I find the cognitive process that generated this quote.)
Procrastination is the thief of compound interest.
There can be value to creating witty mottos for our endeavours (e.g. battling akrasia). But such battles aside, this does not feel like it’s offering much insight into cognitive processes.
Allow me to express now, once and for all, my deep respect for the work of the experimenter and for his fight to wring significant facts from an inflexible Nature, who says so distinctly “No” and so indistinctly “Yes” to our theories.
If I’m interpreting this correctly, then this can be taken as a quote about the difficulty of locating strong hypotheses. Not particularly epiphanic by Less Wrong standards, but it is clearer than some of the previous examples and does indeed allude to a general protocol.
[A]lmost no innovative programs work, in the sense of reliably demonstrating benefits in excess of costs in replicated RCTs [randomized controlled trials]. Only about 10 percent of new social programs in fields like education, criminology and social welfare demonstrate statistically significant benefits in RCTs. When presented with an intelligent-sounding program endorsed by experts in the topic, our rational Bayesian prior ought to be “It is very likely that this program would fail to demonstrate improvement versus current practice if I tested it.”
In other words, discovering program improvements that really work is extremely hard. We labor in the dark—scratching and clawing for tiny scraps of causal insight.
Pretty good. General lesson: Without causal insight, we should be suspicious when a string of Promising Solutions fails. Applicable to solutions to problems in one's personal life. Observing an analogue in tackling mathematical or philosophical problems, this suggests a general attitude to problem-solving of being suspicious of guessing solutions instead of striving for insight.
The use with children of experimental [educational] methods, that is, methods that have not been finally assessed and found effective, might seem difficult to justify. Yet the traditional methods we use in the classroom every day have exactly this characteristic—they are highly experimental in that we know very little about their educational efficacy in comparison with alternative methods. There is widespread cynicism among students and even among practiced teachers about the effectiveness of lecturing or repetitive drill (which we would distinguish from carefully designed practice), yet these methods are in widespread use. Equally troublesome, new “theories” of education are introduced into schools every day (without labeling them as experiments) on the basis of their philosophical or common-sense plausibility but without genuine empirical support. We should make a larger place for responsible experimentation that draws on the available knowledge—it deserves at least as large a place as we now provide for faddish, unsystematic and unassessed informal “experiments” or educational “reforms.”
Good. General lesson: Apply reversal tests to complaints against novel approaches, to combat status quo bias.
The general principle of antifragility, it is much better to do things you cannot explain than explain things you cannot do.
Dual of the quote before last. At first I thought I understood this immediately. Then I noticed I was confused and had to remind myself what Taleb's antifragility concept actually is. I feel like it's something to do with doing that which works, regardless of whether we have a good understanding of why it works. I could guess at, but am not sure of, what the 'explain things you cannot do' part means.
“He keeps saying, you can run, but you can’t hide. Since when do we take advice from this guy?”
You got a really good point there, Rick. I mean, if the truth was that we could hide, it’s not like he would just give us that information.
Trope deconstruction making a nod to likelihood ratios. Could be taken as a general reminder to be alert to likelihood ratios and incentives to lie. Cool.
Conclusion

Out of ten quotes, I would identify two as reinforcing general but basic principles of thought (hypothesis location, likelihood ratios), another that is useful and general (skepticism of Promising Solutions), one which is insightful and general (reversal tests for status quo biases), and one that I wasn't convinced I really grokked but which possibly taught a general lesson (antifragility).
I would call that maybe a score of 2.5 out of 10, in terms of quotes that might actually encourage improvement in general cognitive algorithms. I would therefore suggest something like one of the following:
(1) Be more rigorous in checking that quotes really are rationality quotes before posting them.
(2) Have two separate threads: one for rationality quotes and one for other quotes.
(3) Rename 'Rationality Quotes' to 'Quotes' and just have the one thread. This might seem trivial, but it at least weakens the association of non-rationality quotes with the concept of rationality.
I would also suggest that quote posters provide longer quotes for context, or write the context themselves, and explain the lesson behind the quotes. Some of the above quotes seemed obvious at first, but when I tried to formulate them crisply, I mysteriously found them hard to pin down.
I suspect that, as with site modifications, those of us suggesting ways to find downvote stalkers would do best to figure out how LW works and do as much of the work as possible ourselves. In this case, that probably means downloading LW's source code, figuring out the database structure, thinking of approaches to finding downvote stalkers, formalising them as database queries, then trying to get someone with database access to security-check and run those queries. I suspect this because, from what I gather, Eliezer and those with database access (e.g. presumably Trike) tend to be busy enough with other, important things that it is not worth their time to do all this themselves, so we should do as much of it as possible to make things quicker for them.
Small amount of money where my mouth is: I did read through some of the webpages surrounding LW's source code, downloaded it, and spent a little time trying to figure out how the site and database work. But by the time I got to actually looking at the code, I had little motivation left, and the difficulty of figuring out how the scripts relate to each other and where to start meant I didn't get very far before burning out for the night. I haven't looked again since. :z
A guide to (learning) LW’s code and database (even if just a few paragraphs along the lines of ‘Start by looking at the main article display script, then move on to...’ or commenting the scripts or something) might be higher leverage at this point with respect to improving the site than submitting small code improvements, since it might encourage several others to submit improvements. On the other hand, part of me suspects that the set of people held back just by that might actually be quite small (polarisation of would-be contributors into hardcore and indifferent with few in the middle—‘if they were going to do it, they would have done it by now’).
Given the distribution of coding ability here, it certainly seems ridiculous how slow stuff like this gets done, and I think it’s due to trivial inconveniences, ugh fields, etc., of which figuring out the site and how to submit code etc. is possibly a large part.
Since Eliezer's response, I have shifted my probability distribution over the level of downvote stalking slightly downward, but there is still far too much evidence for me to honestly believe that there aren't any downvote stalkers; at this point it would take at least an explanation of exactly what had been tried, and possibly significant knowledge of the database structure, to convince me it's not happening. So at present I defy the data.
The basic idea of getting cryonics is that it offers a chance of massively extended lifespan, because there is a chance that it preserves one’s identity. That’s the first-run approximation, with additional considerations arising from making this reasoning a bit more rigorous, e.g. that cryonics is competitive against other interventions, that the chance is not metaphysically tiny, etc.
One thing we might make more rigorous is what we mean by ‘preservation’. Well, preservation refers to reliably being able to retrieve the person from the hopefully-preserved state, which requires that the hopefully-preserved state cannot have arisen from many non-matching states undergoing the process.
The process that squares positive numbers preserves perfectly (is an injection), because you can always in theory tell me the original number if I give you its square. The process that squares real numbers preserves imperfectly but respectably since, for any positive output, that output could have come from two numbers (e.g. 1^2=1=(-1)^2). Moreover, if we only cared about the magnitude (modulus, i.e. ignoring the sign) of the input, even squaring over real numbers would perfectly preserve what we cared about.
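A toy sketch of that injectivity point, using small integer ranges to stand in for the positive and signed numbers (the function and ranges here are just illustrative):

```python
def count_preimages(y, domain, f):
    """Count how many elements of the domain map to the output y under f."""
    return sum(1 for x in domain if f(x) == y)

def square(x):
    return x * x

positives = range(1, 10)   # squaring is injective here: each square has one source
signed = range(-9, 10)     # over signed numbers, the sign information is lost

print(count_preimages(25, positives, square))  # 1: only 5
print(count_preimages(25, signed, square))     # 2: both -5 and 5

# But if all we care about is magnitude, nothing we care about was lost:
print(abs(-5) == abs(5))  # True
```

The number of preimages is exactly the measure of how much the process fails to preserve: one preimage per output means perfect retrievability, and multiple preimages mean retrievability only up to whatever distinguishes them (here, the sign).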
Similarly, there is a chance that the hopefully-preserved states generated by cryonics are (or will be) generated only by the original identity, or possibly by some acceptably close identities. That we do not currently know whether it is possible to retrieve acceptably close identities from hopefully-preserved states (or, even if we did, how one would do so) does not necessarily make the probability that it is possible in principle low enough that cryonics can be laughed off.
A monkey might be bamboozled by the sequence of square numbers written in Arabic numerals, but that would not prove that the rule could not be deduced in principle, or that information had been lost for human purposes. Similarly we might currently be unable to reverse vitrification or look under a microscope and retrieve the identity, but it is unfair to demand this level of proof, and it is annoying and frustrating in the same way as logical rudeness (even if technically it is not logically rude) when every few months another person smugly spouts this type of argument as a ‘refutation’ of cryonics and writes cryonicists off, and then gets upvoted handsomely. (Hence Eliezer losing patience and outright declaring that people who don’t seem to (effectively) understand this point about mappings don’t have a clue.)
Formalisations of these concepts arise in more obviously mathematical contexts like the study of functions and information theory, but it feels like neither of those should be necessary background for a smart person to understand the basic idea. In all honesty, though, I think the inferential gap for someone who has not explicitly considered even the idea of injections before is big enough that people often apply the absurdity heuristic, or become scared of doing something unconventional, before the gap can be crossed.
I think there's a good chance that there are neurodegenerative conditions that are currently irreversible but which many more people would think worth working on than cryonics, simply because they associate cryonics with 'computer nerd failure mode', or apply the absurdity heuristic, or because attacking neurodegenerative conditions is Endorsed by Experts whereas cryonics is not, or because RationalWiki will laugh at them. Possible partial explanation: social anxiety that mockery will ensue for trying something not explicitly endorsed by an Expert consensus (which is a realistic fear, given how many people basically laugh at cryonicists or superficially write it off as 'bullshit'). And yes, in this mad world, social anxiety really might be the decisive factor for actual humans in whether to pursue an intervention that could possibly grant them orders of magnitude more lifespan.
You need to clarify your intentions/success criteria. :) Here’s my What Actually Happened technique to the rescue:
(a) You argued with some (they seem) conventional philosophers on various matters of epistemology.
(b) You asked LessWrong-type philosophers (presumably having little overlap with the aforementioned conventional philosophers) how to do epistemology.
(c) You outlined some of the conventional philosophy arguments on the aforementioned epistemological matters.
(d) You asked for neuroscience pointers to be able to contribute intelligently.
(e) Most of the responses here used LessWrong philosophy counterarguments against arguments you outlined.
(f) You gave possible conventional-philosophy counter-counterarguments.

This is largely a failure of communication, because the counterarguers here are playing the game of LessWrong philosophy while you, in response, have played the game of conventional philosophy, and the games have very different win conditions that lead you to play past each other. From skimming over the thread, I am as usual most inclined to agree with Eliezer: epistemology is a domain of philosophy, but conventional philosophers are mostly not the best at epistemology, nor necessarily the people to go to in order to apprehend it. However, I realise this is partly a cached response of mine: wanting to befriend your coursemates and curry favour with teachers isn't an invalid goal, and I'd suspect that in that case you wouldn't best be served by ditching them. Not entirely, anyway...
Based on your post and its language, I identify at least the three following subqueries that inform your query:
(i) How can I win at conventional philosophy?
(ii) How can I win by my own argumentative criteria?
(iii) How can I convince the conventional philosophers?

Varying the balance of these subqueries greatly affects the best course of action.
If (i) dominates, you need to get good at playing the (language?) game of the other conventional philosophers. If their rules are anything like in my past fights with conventional philosophers, this largely means becoming a beast of the ‘relevant literature’ so that you can straightblast your opponents with rhetoric, jargon, namedropping, and citations until they’re unable to fight back (if you get good enough, you will be able to consistently score first-round knockouts), or so that your depth in the chain of counter^n-arguments bottoms them out and you win by sheer attrition in argumentdropping, even if you take a lot of hits.
If (ii) dominates, you need to identify what will make you feel like you’ve won. If this is anything like me in my past fights with conventional philosophers, this largely means convincing yourself that while what they say is correct, their skepticism is overwrought and serves little purpose, and that you are superior for being ‘useful’.
If (iii) dominates, the approach depends on what you’re trying to convince them of. For example, whether the position you’re arguing for is mainstream or contrarian completely changes your argumentative approach.
In the case of (d), the nature of the requested information is actually relatively clear, but the question arises of what you intend to do with it. Is it to guide your own thinking, or mostly to score points from the other philosophers for your knowledge, or...? If it’s for anything other than improving your own arguments by your own standards, I would suggest (though of course you have more information about the philosophers in question) that you reconsider how much of a difference it will make; a lot of philosophers at best ignore and at worst disdain relevant information when it is raised against their positions, so the intuition that relevant information is useful for scoring points might be misguided.
Where you speak of shifting (or having shifted) away from an old position (foundationalism) towards a new one (coherentism), and describe your preference for foundationalism as irrational, it seems like you should probably just go ahead and disavow foundationalism. Or at least, it would if I were confident such affiliations were useful; I’m not. See conservation of expected evidence.
Upvoted. I really like the explanation.
In the spirit of Don’t Explain Falsehoods, it would be nice to test the ubiquity of this phenomenon by specifying a measure of this phenomenon (e.g. correlation) on some representative randomly-chosen pairs. But I don’t mean to suggest that you should have done that before posting this.
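A minimal sketch of what such a check could look like (the function names and toy data here are mine, purely illustrative): draw random pairs of variables from a dataset and compute the correlation for each pair, so the measure is taken on representative samples rather than only on hand-picked examples.

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation; no external dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def correlations_on_random_pairs(variables, n_pairs, rng):
    """Sample random pairs of variables and measure the correlation of
    each pair, instead of only inspecting cherry-picked examples."""
    names = list(variables)
    results = []
    for _ in range(n_pairs):
        a, b = rng.sample(names, 2)
        results.append((a, b, pearson(variables[a], variables[b])))
    return results

# Toy dataset: 'y' tracks 'x' closely; 'noise' is unrelated to either.
data = {
    "x": [1, 2, 3, 4, 5, 6, 7, 8],
    "y": [2, 4, 5, 8, 10, 13, 14, 16],
    "noise": [5, 1, 4, 1, 5, 9, 2, 6],
}
for a, b, r in correlations_on_random_pairs(data, 3, random.Random(0)):
    print(f"corr({a}, {b}) = {r:+.2f}")
```

If the claimed phenomenon is real, the measure should show up across the randomly sampled pairs, not just in the examples that prompted the hypothesis.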
If I’m understanding your comment correctly, I strongly disagree with this way of framing such suggestions. It seems anathema to the rationalist enterprise. Many rationalist simplifications of or modifications to (social) interaction, and other not-strictly-rationalist approaches that we nonetheless endorse, are hit by your argument: requesting tabooing of words, requesting predictions of differing anticipated experiences, Crocker’s rules, confessing noticing confusion, and so on through the Sequences et al.
A core of the rationalist ideal is to take approaches that promote the discovery, recognition, and sharing of truth except where there are situational reasons to hold off on doing so in those specific cases. For example, I agree with warnings that have been raised in the comments on this post about trying Telling without a cooperating or rationalist receiver. But that’s in the same way that asking a Muggle to taboo their words can be a not-so-great idea.
I suspect that high-profile Bay Area (and possibly New York?) rationalists would bear this out. As a specific example, as far as I can tell, Alicorn seems to be the rationalist master of Telling and generally avoiding beating about the bush when she wants something, and wins because of it. More generally, from what I gather as a spectator, there seem to be a lot of techniques or behaviours on instrumental, emotional, and interpersonal fronts that are making the Bay Area awesome and an ever-stronger attractor to rationalists around the world, but which the broader rationalist/LW community does not necessarily hear about.
The extent of the Bay Area subcommunity’s success with this approach seems largely invisible from outside. And I think that means that when someone comes along from there and says to the broader community, ‘Hey, we should try Telling more,’ there is a lot of cultural context (of the Bay Area generally, and all the interrelations with communication systems, openness, etc.), experience, and success underlying that suggestion that is not visible. I think that if enough commenters adopted this approach, it would become recognised, not be misinterpreted, and work. Now that Brienne’s posted this, people can even link to this post to avoid being misinterpreted when they are Telling on LW.
A lot of the Bay Area’s success seems to come from people taking simplifying approaches to communication seriously and cooperating. When you say
It’s also not so obvious that you can effectively change conventions like these by just starting in and asking others to change. If you tried your “developing trust” tactic with me, I’d probably play along to avoid conflict on one occasion, and avoid YOU after that.
that pretty much feels like the complete opposite, i.e. writing off the suggestion and anyone who takes it seriously. I’m not sure if I’d call it defection, but it has a similar feel. On a collective level, both the receptive and the skeptical attitudes are self-fulfilling, because these kinds of things really do seem to work when enough people take them seriously, and will certainly fail if everyone scorns them. (E.g. look at how many memes from the Sequences are pretty much unanimously taken seriously.)
(I acknowledge that I might have completely misread your comment.)
I’ve been ever-more-excitedly watching you post about your training and head off to workshops over these past few months. I teared up a little when I got to that standalone sentence, “On Saturday I was invited to become a MIRI research associate,” because now that I know your origin story, I understand how much that invitation must have meant to you.
I haven’t really felt qualified to comment on many of your other posts (sometimes the level of the material, sometimes feeling too shy to commend your efforts), so I shall say now:
Thank you. We’re rooting for you. Keep on saving the world!
‘It goes without saying’ that I’m hella looking forward to your next posts.
Salute