One fundamental difference between LW and most cults is that LW tells you to question everything, even itself.
MTGandP
So there comes a point in Buddhism where you’re not supposed to be skeptical anymore. And Objectivists aren’t supposed to question Ayn Rand.
You can do a thousand times better (very conservatively) if you expand your domain of consideration beyond Homo sapiens.
Here is the full Feynman quote that was used above:
Poets say science takes away from the beauty of the stars—mere globs of gas atoms. Nothing is ‘mere’. I too can see the stars on a desert night, and feel them. But do I see less or more? The vastness of the heavens stretches my imagination—stuck on this carousel my little eye can catch one-million-year-old light. A vast pattern—of which I am a part… What is the pattern or the meaning or the why? It does not do harm to the mystery to know a little more about it. For far more marvelous is the truth than any artists of the past imagined it. Why do the poets of the present not speak of it? What men are poets who can speak of Jupiter if he were a man, but if he is an immense spinning sphere of methane and ammonia must be silent?
Given my current mental capacities, I think that any “proof” of God would be more easily attributed to hallucination. However, it should still be possible for God to prove His existence. If He is omnipotent, then he can increase my mental capacity to the extent that I can distinguish between divine intervention and a hallucination of divine intervention.
Eliezer raises the issue of testing a rationality school. I can think of a simple way to at least approach this: test the students for well-understood cognitive biases. We have tests for plenty of biases; some of the tests don’t work if you know about them, which these students surely will, but some do, and we can devise new tests.
For example, you can run the classic test of confirmation bias where you give someone solid evidence both for and against a political position and see whether they become more or less certain. Even people who know about this experiment will often still fall prey to it—and if they don’t, they have demonstrated an ability to escape confirmation bias.
I will remark, in some horror and exasperation with the modern educational system, that I do not recall any math-book of my youth ever once explaining that the reason why you are always allowed to add 1 to both sides of an equation is that it is a kind of step which always produces true equations from true equations.
I had a similar experience. When I took Algebra I, I understood that you could add, subtract, multiply, or divide by some value (non-zero, in the case of division) on both sides of an equation and it would still hold. I remember at one point I was working on solving an equation that required taking the square root of both sides, and I wondered, “Are you allowed to do that?”
It wasn’t until years later that I figured it out: you’re allowed to perform any function on both sides of the equation, because both sides represent the same value. For any function, if a = b, f(a) = f(b).
In this context, you’re not allowed to divide by 0 because division by 0 is not a function. Multiplication by zero is a function, but every real input maps to an output of 0; the function is not a one-to-one function, and thus not invertible.
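A minimal sketch of the point above (the function name `f` is just for illustration): applying any function to both sides of a true equation yields a true equation, but a non-one-to-one function like multiplication by zero also maps *unequal* values to equal outputs, which is why the step can’t be reversed.

```python
def f(x):
    return 0 * x  # multiplication by zero: every real input maps to 0

a, b = 3, 5
assert a != b        # a = b is false as an equation...
assert f(a) == f(b)  # ...yet after multiplying both sides by 0 it looks true,
                     # so from f(a) = f(b) we cannot recover whether a = b
```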
If you’re trying to find the value of x and there’s only one x in the equation, it’s simply a matter of inverting every function in the equation from the outside in. It’s harder if you have multiple x’s because you have to try to combine them in some reasonable way—and sometimes you actually want to separate them, e.g. “x^2 − 9 = 0” into “(x − 3)(x + 3) = 0”.
Solving this problem appears equivalent to writing a computer program to solve algebraic equations.
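Such programs do exist: computer algebra systems automate exactly this kind of manipulation. A quick sketch using SymPy’s solver (assuming the `sympy` package is installed), applied to the example equation above:

```python
import sympy

x = sympy.symbols("x")

# Multiple x's: the solver effectively performs the factoring step,
# treating x**2 - 9 = 0 as (x - 3)*(x + 3) = 0.
roots = sympy.solve(x**2 - 9, x)
print(sorted(roots))  # the two roots, -3 and 3
```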
[R]ationalist opinion leaders are better able to . . . give up faster when things don’t work.
Why is this a good thing? It seems to me that people give up too easily just as much as—if not more than—the opposite, especially when they’re trying something that they don’t expect to work. You have to stick with it long enough to collect a reasonable amount of data.
The deeper danger is in allowing your de facto sense of rationalist community to start being defined by conformity to what people think is merely optimal, rather than the cognitive algorithms and thinking techniques that are supposed to be at the center.
This is true. Wouldn’t it be beneficial, though, for any particular community to focus on upholding rationalist principles? If the LW community is specifically committed to rationality, other communities should be committed to rationality as a side effect—as an optimization heuristic.
The effective altruism community, for example, already does a pretty good job of this. Effective altruists tend to be aware of the sorts of biases that get in the way of effective giving. On the other hand, most charities and charity-based communities don’t have this focus on rationality. The breast cancer movement, for instance, does not give the same attention to rationality as the effective altruism movement.
Of course, if the breast cancer movement did give attention to rationality, it probably wouldn’t be the breast cancer movement anymore—it would be the effective altruism movement. If you’re looking for the optimal method for preventing breast cancer, why not generalize that and just look for the optimal method for helping people (which is almost certainly not breast cancer research)?
I think the point is that if something happens, it has probability 1 of having happened, so it doesn’t make sense to call it “unlikely.” A perfect model could have predicted it with probability 1. If you failed to predict it, it’s because your model was imperfect.
I think, however, that plenty of reasonable models of group interactions given our current knowledge would have failed to predict the rise of Objectivism.
True. I didn’t understand how the anecdote related to the article, although Daniel Burfoot’s comment helped to clarify.
I don’t generally listen to dubstep so I can’t pull anything from memory, but I found this one by browsing. I think it’s pretty good.
“has the same meaning as”
“most of the time when you want to use the former, you should use the latter instead”
I think, in general, these statements amount to the same thing. “It’s rational to __” generally means the same thing as the “deflated” statement; the key difference is that the former invokes the word “rational” or “rationality,” and that overuse ends up weakening the term.
I posted a comment with a similar sentiment. I think it’s not necessarily important to explicitly include non-rationalists in communities (although I’m not sure that’s what you’re saying, so forgive me if I misinterpreted you). But I do think it’s a good idea to promote rationalist leanings in groups that don’t necessarily identify as rationalist.
In fact, that’s how I discovered LW. I participate in the utilitarianism community, and a large proportion of utilitarians (on the internet, at least) also identify as rationalist. I started reading LW as an indirect result of my reading about utilitarianism. Utilitarians certainly seem to do better when they practice rationality, and other communities would, too.
Interestingly, the other major party never seems to fail to notice. Right now there are endless videos on YouTube of Romney’s flip-flopping, and Republicans reacted similarly to Kerry’s waffling in 2004. But for some reason, supporters of the candidate in question either don’t notice or don’t care.
What if 2 + 2 varies over something other than time that nonetheless correlates with time in our universe? Suppose 2 + 2 comes out to 4 the first 1 trillion times the operation is performed by humans, and to 5 on the 1 trillion and first time.
I suppose you could raise the same explanation: the definition of 2 + 2 makes no reference to how many times it has been applied. I believe the same can be said for any other reason you may give for why 2 + 2 might cease to equal 4.
My view is that if your philosophy is not unsettled daily then you are blind to all the universe has to offer.
Neil deGrasse Tyson
You can’t distinguish your group by doing things that are rational and believing things that are true. If you want to set yourself apart from other people you have to do things that are arbitrary and believe things that are false.
That’s certainly true. I think the point isn’t that you should be constantly changing everything you believe, but that you should actively seek out new knowledge—especially knowledge that has a high probability of shifting the way you think (in a positive direction, of course).
Why do Objectivists so frequently believe that anthropogenic global warming is not real? (It appears to be the consensus opinion on the Objectivism forum.) This belief doesn’t seem to have anything to do with Objectivism, and Ayn Rand certainly never mentioned global warming.