Just want to echo: thanks for doing this. This is awesome.
The assumed opinions I’m talking about are not the substance of your argument; they’re things like “I think that most of these reactions are not only stupid, but they also show that American liberals inhabit a parallel universe”, and what is implied in the use of phrases like ‘completely hysterical’, ‘ridiculous’, ‘nonsensical’, ‘preposterous’, ‘deranged’, ‘which any moron could have done’, ‘basically a religion’, ‘disconnected from reality’, ‘save the pillar of their faith’, etc. You’re clearly not interested in discussion of your condemnation of liberals, and certainly not rational discussion. You take it as an obvious fact that they are all stupid, deranged morons.
So when you write “I’m also under no delusion that my post is going to have any effect on most of those who weren’t already convinced”, I think you are confused. People who don’t already agree with you won’t be convinced because you obviously disdain them and are writing with obviously massive bias against them. Not because their opinions are “basically a religion, which no amount of evidence can possibly undermine.”
I think your post would be much stronger if you just removed all your liberal-bashing entirely, quoted an example of someone saying hate crimes had gone up since Trump’s election, and then did the research. I’m totally opposed to polemics because I think they have no good results, especially the kind that panders entirely to one side while giving the finger to the other. (I also think you’re wildly incorrect in your understanding of liberals, as revealed by some of your weird stereotypes, but this is not the place to try to convince you otherwise.) But I guess if that’s the way people write in a certain community and you’re writing for that community, you may as well join in. I prefer to categorically avoid communities that communicate like that—I’ve never found anything like rational discussion in one.
I also think such obvious bias makes your writing weaker even for people on your side. It’s hard to take writing seriously that is clearly motivated by such an agenda and is clearly trying to get you to rally with it in your contempt for a common enemy.
But if you would spend $2500 over ten years of glasses- and contacts-wearing—which is very possible, especially if you’re prone to breaking them—then it pays for itself already. Or twenty years, whatever, ignoring alternative ways to invest that money. Add in more for the massive convenience of not having to deal with glasses and contacts, too.
This is why I’m going in for a LASIK pre-op next week. I’m certain it will improve my quality of life appreciably and save me money over the long term to boot.
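For concreteness, here’s the back-of-the-envelope break-even logic as a tiny Python sketch. The $2500 figure is from above; the yearly glasses/contacts spend is an assumption I made up for illustration:

```python
# Break-even sketch with made-up upkeep numbers.
lasik_cost = 2500        # one-time cost, as above
glasses_per_year = 250   # assumed yearly spend on glasses/contacts/replacements

years_to_break_even = lasik_cost / glasses_per_year
print(years_to_break_even)  # 10.0 years, before counting convenience
```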
I think you’ve subtly misinterpreted each of the virtues (not that I think the twelve-virtue list in particular is special; they’re just twelve good aspects of rational thought).
The virtues apply to your mental process for parsing and making predictions about the world. They don’t exactly match the real-world usages of these terms.
Consider these in the context of winning a game. Let’s talk about a real-world game with social elements, to make it harder, rather than something like chess. How about “Suppose you’re a small business owner. How do you beat the competition?”
1) Curiosity: refers to the fact that you should be willing to consider new theories, or theories at all instead of intuition. You’re willing to consider, say, that “customers return more often if you make a point to be more polite”. The arational business owner might lose because they think they treat people perfectly fine, and don’t consider changing their behavior.
2-4) Relinquishment/lightness/evenness refers to letting your beliefs be swayed by the evidence, without personal bias. In your example: seeing a woman appear to be cut in half absolutely does not cause you to think she’s actually cut in half. That theory remains highly unlikely. But it does mean that you have to reject theories that don’t allow the appearance of that, and go looking for a more likely explanation. (If you inspect the whole system in detail and come up with nothing, maybe she was actually cut in half! But extraordinary claims require extraordinary evidence, so you better ask everyone you know, and leave some other very extreme theories (such as ‘it’s all CGI’) as valid, as well.)
In my example, the rational business-owner acts more polite to see if it helps retain customers, and correctly (read: mathematically or pseudo-mathematically) interprets the results, being convinced only if they are truly convincing, and unconvinced if they are truly not. The arational business owner doesn’t check, or does and massages the results to fit what they wanted to see, or ignores the results, or disbelieves the results because they don’t match their expectations. And loses.
5) Argument—if you don’t believe that changing your behavior retains customers, and your business partner or employee says they do, do you listen? What if they make a compelling case? The arational owner ignores them, still trusting their own intuition. The rational owner pays attention and is willing to be convinced—or convince them of the opposite, if there’s evidence enough to do so. Argument is on the list because it’s how two fallible but truth-seeking parties find common truth and check reasoning. Not because arguing is just Generally A Good Idea. It’s often not.
6) Empiricism—this is about debating results, not words. It’s not about collecting data. Collecting data might be a good play, or it might not. Depends on the situation. But it’s still in the scope of rationalism to evaluate whether it is or not.
7) Simplicity—this doesn’t mean “pick simple strategies in life”. This means “prefer simple explanations over complex ones”. If you lower your prices and it’s a Monday and you get more sales, you prefer the lower prices explanation over “people buy more on Mondays” because it’s simpler—it doesn’t assume invisible, weird forces; it makes more sense without a more complex model of the world. But you can always pursue the conclusion further if you need to. It could still be wrong.
8) Humility—refers to being internally willing to be fallible. Not to the social trait of humility. Your rational decision making can be humble even if you come across, socially, as the least humble person anyone knows. The humble business owner realizes they’ve made a mistake with a new policy and reverses it because not doing so is a worse play. The arational business owner keeps going when the evidence is against them because they still trust their initial calculation when later evidence disagrees.
9-10) Perfectionism/Precision: if it’s true that in social games you don’t need to be perfect, just better than others, then “perfect play” means maximizing P(your score is better than theirs), not maximizing E(your score). You can always try to play better, but you have to play the right game. (There’s a small sketch of this distinction after the list.)
And if committing N resources to something gets a good chance of winning, while committing N+1 gets a better chance but has negative effects on your life in other ways (say, your mental health), then it can be the right play to commit only N. Perfect and precise play is about the larger game of your life, not the current game. The best play in the current game might be imperfect and imprecise, and that’s fine.
11) Scholarship—certainly it doesn’t always make sense when weighed against other things. Until it does. The person on the poverty line who learns more when they have time gains powers the others don’t have. It may unlock doors out of poverty that others can’t access. As with everything else, it must be weighed against the other exigencies of their life.
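As promised under 9-10, here’s a minimal simulation of the P-versus-E distinction, with invented numbers: against a rival who reliably scores 60, the strategy with the higher expected score is the worse play.

```python
import random

def opponent_score():
    return 60  # a deterministic rival, for simplicity

def safe_play():
    return 50  # E[score] = 50, but it never beats 60

def risky_play():
    # 100 with probability 0.3, else 0: E[score] = 30, yet it wins 30% of the time
    return 100 if random.random() < 0.3 else 0

def win_rate(strategy, trials=100_000):
    return sum(strategy() > opponent_score() for _ in range(trials)) / trials

print(win_rate(safe_play))   # ~0.0  (higher expected score, never wins)
print(win_rate(risky_play))  # ~0.3  (lower expected score, better play)
```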
(Also, by the way, I’m not sure what your title means. Maybe rephrase it?)
While I think it’s fine to call someone out by name if nothing else is working, the way you’re doing it is unnecessarily antagonistic and seemingly intentionally spiteful, or at least utterly un-empathetic. What you’re doing can (and in my opinion ought to) be done empathetically, for the sake of cohesion and of not hurting people more than necessary.
Giving an excuse for why it’s okay that you, specifically, are doing it, and declaring that you’re “naming and shaming” on purpose, makes it worse. The post already shames the person without your announcing how aware of that you are; you ought to be taking an “I’m sorry I have to do this” tone instead of an “I’m immune to repercussions, so I’m gonna make sure this stings extra!” tone.
At least, this is how it would work in the several relatively typical (American) social groups that I’m familiar with.
One general suggestion to everyone: upvote more.
It feels a lot more fun to be involved in this kind of community when participating is rewarded. I think we’d benefit by upvoting good posts and comments a lot more often (based on the “do I want this around?” metric, not the “do I agree with this poster” metric). I know that personally, if I got 10-20 upvotes on a decent post or comment, I’d be a lot more motivated to put more time in to make a good one.
I think the appropriate behavior, when reading a comment thread, is to upvote almost every comment unless you doubt it’s worth keeping around—then downvote if you’re sure it’s bad, or leave it alone if you’re ambivalent. Or, alternatively: upvote comments you think someone else would be glad to have read (most of them), leave alone comments that are just “I agree” without meat, and downvote comments that don’t belong or are poorly crafted.
This has the useful property of costing users almost zero effort while (I suspect) having a large effect if adopted collectively.
I found that idea so intriguing I made an account.
Have you considered that such a causal graph can be rearranged while preserving the arrows? I’m inclined to say, for example, that by moving your node E to be on the same level as—simultaneous with—B and C, and squishing D into the middle, you’ve done something akin to taking a Lorentz transform.
I would go further and say that the act of choosing a “cut” of a discrete causal graph—assuming B, C, and D share some common ancestor, to prevent rearranging things completely—corresponds to the act of choosing a reference frame in Minkowski space. Which makes me wonder whether max-flow algorithms have a continuous generalization.
edit: in fact, max-flows might be related to Lagrangians. See this.
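(On the max-flow note, here’s a minimal sketch of the “cut” idea using networkx on an invented toy DAG; node names and capacities are made up. The minimum cut splits the nodes into a “past” side and a “future” side, loosely like choosing a spacelike slice.)

```python
import networkx as nx

# Toy causal DAG: A is a common ancestor, F a final effect.
G = nx.DiGraph()
G.add_edge("A", "B", capacity=1)
G.add_edge("A", "C", capacity=1)
G.add_edge("B", "D", capacity=1)
G.add_edge("C", "E", capacity=1)
G.add_edge("D", "F", capacity=1)
G.add_edge("E", "F", capacity=1)

# Min cut = max flow (here 2: one unit along each branch).
cut_value, (past, future) = nx.minimum_cut(G, "A", "F")
print(cut_value)     # 2
print(past, future)  # one valid past/future partition of the nodes
```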
Hi, I’m Alex.
Every once in a while I come to LessWrong because I want to read more interesting things and have more interesting discussions on the Internet. In the past I’ve found it a lot easier to spend time on Reddit (having filtered out all the drivel) and to dredge through Quora for actually insightful content (seriously, do they have any sort of organization system for finding reading material?). LessWrong’s discussions have seemed slightly inaccessible, so maybe posting an introduction, like I’m supposed to, will set in motion my figuring out how this community works.
I’m interested in a lot of things here, but especially physics and mathematics. I would use the word “metaphysics”, but it’s been appropriated for a lot of things that aren’t actually meta-physics in the sense I mean. Maybe I want “meta-mathematics”? Anyway, I’m really keen on the theory behind physical laws and on attempts at reformulating math and physics into more lucid and intuitive systems. Some of my reading material (I won’t say research, but … maybe I should say research) recently has been on geometric algebra, re-axiomatizing set theory, foundations and interpretations of quantum mechanics, reformulations of relativity, the interpretation of quantum field theory, things like that. I have a permanent distaste for spinors and all the math we don’t try to justify with intuition when teaching physics, so I’ve spent a lot of the last few years studying those.
I was really intrigued by the articles (blog posts?) on what proofs actually mean, and on causality, a few months ago; that’s when I started reading the site. I’ve spent the better part of the last year sifting through all kinds of math ideas related to reinterpretations or ‘fundamental’ insights, so I hope hanging around here can expose me to some more.
Oh, and I’ve spent a good amount of time on the Internet refuting crackpots who think they solved physics, so I, um, promise I’m not one.
I’m a programmer by trade, with a keen interest in revolutionary (or just convenient) software projects, disruptive ideas, and really naive, idealistic world-changing schemes, which is fun.
I have read some of the sequences and such, but—I guess I’m a rationalist at heart already, maybe because I’ve studied lots of logic—a lot of the basic stuff seemed pretty apparent to me. I was already up to speed on Bayes and quantum mechanics, for example, and never considered anything other than atheism. And I already optimize and try to look at life in terms of expected payoffs and other very rational things like that. But it’s possible I’ve missed a lot of the material here—I find navigating the site pretty unintuitive.
I’m based in Seattle and I hope to go to the meetups if they… ever happen again. I mostly just like talking to smart people; I find it makes my brain work better—as if there’s some sort of ‘conversation mode’ which hypercharges my creativity.
Oh, and I have a blog: http://ajkjk.com/blog/. I’m slightly terrified of linking it; it’s the first time I’ve shown it to anyone but friends. It only has 6 posts so far. I’ve written a lot more but deleted/hid them until they’re cleaned up.
What would you like to see posts about?
Thanks! Validation really, really helps with making more. I hope to, though I’m not sure I can churn them out that quickly since I have to wait for an idea to come along.
I disagree. The point is that most comments are comments we want to have around, and so we should encourage them. I know that personally I’m unmotivated to comment, and especially to put more than a couple minutes of work into a comment, because I get the impression that no one cares if I do or not.
Here’s an opinion on this that I haven’t seen voiced yet:
I have trouble being excited about the ‘rationalist community’ because it turns out it’s actually the “AI doomsday cult”, and never seems to get very far away from that.
As a person who thinks we have far bigger fish to fry than impending existential AI risk—like how irrational most people everywhere (including us) are, or how divorced rationality is from our political discussions and collective decision-making process, or how climate change or war might destroy our relatively peaceful global state before such an AI even exists—I find that I have little desire to contribute here. Being a member of this community seems to require buying into the AI thing, and I don’t, so I don’t feel like a member.
(I’m not saying that AI stuff shouldn’t be discussed. I’d like it to dominate the discussion a lot less.)
I think this community would have an easier time keeping members, not alienating potential members, and getting more useful discussion done if the discussions centered more on rationality and effectiveness in general, instead of on the esteemed founder’s pet obsession.
That case doesn’t count in discussions of coloring graphs, such as in the four-color map theorem, and that’s the kind of math this is most similar to. So you really need to specify.
Interleaving isn’t really the right way to get consistent results for summations. Formal methods like Cesàro summation are the better way of doing things, and they give the result 1/2 for that series. There’s a pretty good overview in the Wikipedia article on summing 1 − 2 + 3 − 4 + ⋯.
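As a quick illustration of what Cesàro summation does for Grandi’s series 1 − 1 + 1 − 1 + … (the one whose Cesàro sum is 1/2): the partial sums alternate 1, 0, 1, 0, …, and their running average converges to 1/2.

```python
# Cesàro summation: average the partial sums instead of summing directly.
def cesaro_means(terms):
    partial_sum, running_total = 0, 0
    for n, term in enumerate(terms, start=1):
        partial_sum += term
        running_total += partial_sum
        yield running_total / n

grandi = ((-1) ** n for n in range(10_000))  # 1, -1, 1, -1, ...
print(list(cesaro_means(grandi))[-1])        # 0.5
```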
This reminds me of an effect I’ve noticed a few times:
I observe that in debates, having two (or more) arguments for your case is usually less effective than having one.
For example, if you’re trying to convince someone (for some reason) that “yes, global warming is real”, you might have two arguments that seem good to you:
scientists almost universally agree that it is real
the graphs of global temperature show very clearly that it is real
But if you actually cite both of these arguments, you start to sound weaker than if you picked one and stuck with it.
With one argument your stance is “look, this is the argument. You either need to accept this argument or show why it doesn’t work—seriously, I’m not letting you get past this”. And if they find a loophole in your argument (maybe they find a way to believe the data is totally wrong, or something), then you can bust out another argument.
But when you present two arguments at once, it sounds like you’re just fishing for arguments. You’re one of those people who’s got a laundry list of reasons for their side, which is something that everyone on both sides always has (weirdly enough), and your stance has become “look how many arguments there are” instead of “look HOW CONVINCING these arguments are”. So you become easier to disbelieve.
As it happens, there are many good arguments for the same point, in many cases. That’s a common feature of Things That Are True—their truth can be reached in many different ways. But as a person arguing with a human, in a social setting, you often get a lot more mileage out of insisting they fight against one good argument instead of just overwhelming them with how many arguments you’ve got.
The weak arguments mentioned in the linked article multiply this effect considerably. In my mind there are, like, two obvious arguments against theism that you should sit on and not waver from: “What causes you to think this is correct (over anything else, or just over ‘we don’t know’)?” and, if they cite their personal experience / mental phenomenon of religious feelings, “Why do you believe your mental feelings have weight when human minds are so notoriously unreliable?”
Arguments about Jesus’ existence are totally counterproductive—they can only weaken your stance, since, after all, who would be convinced by that who wasn’t already convinced by one of the strong arguments?
Is there an index of everything I ought to read to be ‘up-to-date’ in the rationalist community? I keep finding new stuff: new ancient LW posts, new bloggers, etc. There’s also this on the Wiki, which is useful (but is curiously not what you find when you click on ‘all pages’ on the wiki; that instead gets a page with 3 articles on it?). But I think that list is probably more than I want—a lot of it is filler/fluff (though I plan to at least skim everything, if I don’t burn out).
I just want to be able to make sure, if I try to post something I think is new here, that it hasn’t already been talked to death.
I strongly disagree with the approaches usually recommended online, which involve some mixture of sites like Codecademy, looking into open-source projects, and lots of other hard-to-motivate things. Maybe my brain works differently, but those never appealed to me. I can’t do book learning, and I can’t make myself just up and dedicate myself to something I’m not already drawn to. If you’re similar, try this instead:
Pick a thing that you have no idea how to make.
Try to make it.
Now, when I say “try”… new programmers often envision just sitting down and writing, but when they try it they realize they have no idea how to do anything. Their mistake is thinking that sitting down and knowing what to do is what coding is like. I always surprise people who are learning to code with this fact: when I’m writing code in any language other than my main ones (Java, mostly), I google something approximately once every two minutes. I spend most of my time searching for how to do even the most basic things. When it’s time to actually make something work, it’s usually just a few minutes of coding after much more time spent learning.
You should try to make the “minimum viable product” of whatever you want to make first.
If it’s a game, get a screen showing—try to do it in less than an hour. Don’t get sidetracked by anything else; get the screen up. Then get a character moving with arrow keys. Don’t touch anything until you have a baseline you can iterate on, because every change you make should be immediately reflected in the product. Until you can see quick results from your hard work you’re not going to get sucked in.
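If you’re working in Python, that baseline can be as small as this (pygame is just one reasonable choice among many; the point is how little it takes to get something iterable):

```python
import pygame

# Minimal baseline: a window, and a square that moves with the arrow keys.
pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()
x, y = 320, 240

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    # Arrow keys move the "character"; bools subtract to -1/0/+1.
    keys = pygame.key.get_pressed()
    x += (keys[pygame.K_RIGHT] - keys[pygame.K_LEFT]) * 5
    y += (keys[pygame.K_DOWN] - keys[pygame.K_UP]) * 5

    screen.fill((0, 0, 0))
    pygame.draw.rect(screen, (255, 255, 255), (x, y, 20, 20))
    pygame.display.flip()
    clock.tick(60)  # cap at 60 frames per second

pygame.quit()
```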
If it’s a website or a product, get the server running in less than an hour. Pick a framework and a platform and go—don’t get caught up in the details. Setting up websites is secretly easy (python -m SimpleHTTPServer, or python -m http.server on Python 3!), but if you’ve never done it you won’t know that. If you need one, set up a database right after. Get started quickly. It’s possible with almost every architecture if you just search for cheat sheets and quick-start guides and such. You can fix your mistakes later, or start again if something goes wrong.
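Here’s what the under-an-hour website baseline might look like in Python (Flask is one common microframework choice; any equivalent works):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello, world! Iterate from here."

if __name__ == "__main__":
    # debug=True reloads on save, so every change shows up immediately.
    app.run(port=8000, debug=True)  # visit http://localhost:8000
```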
If you do something tedious, automate it. I have a shell script that copies some JavaScript libraries and HTML/JS templates into a new Dropbox folder and starts a server running there, so I can go from naming my project to having an iterable prototype with some common elements I always reuse in less than five minutes. That gets me off the ground much faster, in less than 50 lines of script.
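A rough Python version of that kind of scaffolding script, if shell isn’t your thing; the template and Dropbox paths here are invented for illustration:

```python
import shutil
import subprocess
import sys
from pathlib import Path

TEMPLATES = Path.home() / "templates" / "web"   # your reusable JS/HTML boilerplate

def new_project(name):
    dest = Path.home() / "Dropbox" / name
    shutil.copytree(TEMPLATES, dest)            # copy templates into a fresh folder
    # Serve it immediately so the prototype is iterable right away.
    subprocess.run([sys.executable, "-m", "http.server", "8000"], cwd=dest)

if __name__ == "__main__":
    new_project(sys.argv[1])
```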
If you like algorithms or math or whatever, sure, do Project Euler or join TopCoder—those are fun. The competition will inspire some people to be fantastic at coding, which is great. I never got sucked in for some reason, even though I’m really competitive.
If you use open-source stuff, sure, take a look at that. I’m only motivated to fix things that I find lacking in tools I use, which in practice has never led to my contributing to open source. Rather, I find myself making clones of closed-source software so I can add features to it.
Oh, and start using Git early on. It’s pretty great. GitHub is neat too, and it basically acts as a resume if you go into programming. But remember: setting it up is secretly easy, even if you have no idea what you’re doing. Somehow things you don’t understand are off-putting until you look back and realize how simple they were.
Hmm, that’s all that comes to mind for now. Hope it helps.