Thank you!
Gust
No, I study in Brazil. I don’t know what the job market or the quality of law schools is like there in the U.S.… I guess I could tell you what I think about the experience I’m having here, but I suspect it would be wildly different from what you’d have there.
I’m not sure how I feel about Eliezer’s approach to religion in the Sequences. On the one hand, I like using sarcasm; on the other, that doesn’t seem to work for more deeply rooted beliefs, like religion. I think he should’ve left religion out of his sequences on rationality and criticized it later. The way he did it, it may scare off people who still have a somewhat deep link to religion before they can learn enough to be able to break free.
On the third hand, I think I may be biased towards avoiding conflicts.
On the specific point illustrated by the story, as expressed in the quote at the beginning of the post, I do agree. I try to induce that same feeling of “shock at how stupid people can be” when I notice a mistake I’ve made, as some kind of mini-“Crisis of Faith”.
Eliezer makes the same point in If Many-Worlds Had Come First.
I think the fact that the Mind Projection Fallacy is a really strong bias in humans significantly decreases the weight of that possibility. Smart people think it may be true because it sounds like the easiest explanation, for a human, not because they actually thought a lot about it from a strictly rational point of view.
That’s some kind of general counter-argument against “trust the majority”, I think. When you learn that the majority has some kind of bias that supports its belief, you should decrease the strength you assign to the evidence “the majority thinks T is true”: the likelihood ratio P(majority believes T | T) / P(majority believes T | ¬T) gets close to 1, so the evidence is weak.
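To make that arithmetic concrete, here’s a toy Bayes’-rule sketch (all numbers are made up purely for illustration):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H | E) from the prior and the two likelihoods."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

# Evidence E = "the majority believes H".
# If a shared bias makes the majority likely to believe H whether or
# not H is true, the likelihood ratio is near 1 and the update is weak.
weak = posterior(0.5, 0.9, 0.8)    # ratio 1.125: barely moves the prior
strong = posterior(0.5, 0.9, 0.1)  # ratio 9: moves the prior a lot
print(round(weak, 3), round(strong, 3))  # -> 0.529 0.9
```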
Did you have such an experience? Please tell me about it.
I don’t get it =|
Bringing some empirical input from a little different presidential electoral system.
Here in Brazil the system for presidential elections has two rounds. All candidates run in the first round. If someone gets more than 50% of the votes, that candidate wins. If no one does, the two candidates with the most votes go on to a second round, which the more voted of the two wins.
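The rule above can be sketched in a few lines (names and numbers here are hypothetical, just to show the mechanics):

```python
def runoff_winner(first_round, second_round):
    """Two-round (runoff) rule: outright win with >50% of the
    first-round votes, otherwise the top two face a second round."""
    total = sum(first_round.values())
    leader = max(first_round, key=first_round.get)
    if first_round[leader] > total / 2:
        return leader
    top_two = sorted(first_round, key=first_round.get, reverse=True)[:2]
    # second_round maps the two finalists to their round-two vote shares
    return max(top_two, key=lambda c: second_round[c])

# Made-up shares, loosely shaped like a close first round:
first = {"A": 48.6, "B": 41.6, "C": 6.85, "D": 2.95}
print(runoff_winner(first, {"A": 60.8, "B": 39.2}))  # -> A
```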
In the last three elections (2002, 2006, 2010), there were 6, 8 and 9 presidential candidates. There was a second round in all of them, with candidates from the same parties (Workers’ Party and Social Democracy Party; the labels don’t mean much, though) all three times. The third-place candidates, though, were from different parties in each election, with 17.9%, 6.85% and 19.3% of the votes. The 2006 election was Lula’s reelection, and the first-round votes for first and second place were the closest of all three years (48.6%/41.6%).
I’m not sure what the data means or how Eliezer’s line of thought would have to change to apply here. I think it changes the picture a little, because even getting to the second round is seen as a kind of victory. So people can vote for the candidate they really prefer because, hey, maybe that candidate can get to the second round and have a chance! And the chance that there will be no second round is small, so the part about “keeping the wrong lizard out” is postponed.
“T is true; therefore, evidence that it is false is false. This constitutes invalid reasoning, because it rules out new knowledge that may in fact render it truly false.”
Actually, I think if “I know T is true” means you assign probability 1 to T being true, and if you ever were justified in doing that, then you are justified in assigning probability 1 to the evidence being misleading, so it’s not even worth taking into account. The problem is that, for all we know, one is never justified in assigning probability 1 to any belief. So I’d say the problem is a wrong question.
Edited: I meant probability 1 of misleading evidence, not 0.
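The underlying arithmetic: once a hypothesis has prior probability 1, Bayes’ rule can never move it, no matter how damning the evidence. A toy sketch with illustrative numbers:

```python
def update(prior, p_e_given_t, p_e_given_not_t):
    """Bayes' rule for P(T | E)."""
    num = p_e_given_t * prior
    return num / (num + p_e_given_not_t * (1 - prior))

# With prior 1, evidence that is 999 times likelier under ~T
# changes nothing:
print(update(1.0, 0.001, 0.999))  # -> 1.0
# With any prior short of 1, the same evidence moves you a lot:
print(update(0.99, 0.001, 0.999))
```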
In essence, if a counterargument comes along demonstrating that T is false, then it necessarily would involve demonstrating that invalid reasoning was somewhere committed in someone’s having arrived at the (fallacious) truth of T.
I think I see your point, but if you allow for the possibility that the original deductive reasoning is wrong, i.e. deny logical omniscience, don’t you need some way to quantify that possibility? And in the end, wouldn’t that mean treating the deductive reasoning itself as Bayesian evidence for the truth of T?
Unless you assume that you can’t make a mistake in the deductive reasoning, “T is a theorem of the premises” is a hypothesis to be evaluated within the Bayesian framework, with Bayesian evidence, not anything special.
And if you do assume that you can’t make a mistake in the deductive reasoning, I think there’s no sense in paying attention to any contrary evidence.
Actually the current polarization is recent, I think. In 1989 some guy from an obscure party got elected. But Brazilian democracy itself is young: we’ve only had 6 direct presidential elections since redemocratization in 1985, and in the last 5 the second round, when it happened, was between the Workers’ and Social Democracy parties. So the polarization is recent, but there’s not much data from before it.
Sorry, I think I still don’t understand your reasoning.
First, I have the beliefs P1, P2 and P3, then I (in an apparently deductively valid way) reason that [C1] “T is a theorem of P1, P2, and P3”, therefore I believe T.
Either my reasoning that finds out [C1] is valid or invalid. I do think it’s valid, but I am fallible.
Then the Authority asserts F, I add F to the belief pool, and we (in an apparently deductively valid way) reason [C2] “~T is a theorem of F, P1, P2, and P3”, therefore we believe ~T.
Either our reasoning that finds out [C2] is valid or invalid. We do think it’s valid, but we are fallible.
Is it possible to conclude C2 without accepting that I made a mistake when reasoning to C1 (and therefore that we were wrong to think that line of reasoning was valid)? Otherwise we would have both T and ~T as theorems of F, P1, P2, and P3, and we should conclude that the premises lead to a contradiction and should be revised; we wouldn’t jump from believing T to believing ~T.
But the story doesn’t say the Authority showed a mistake in C1. It says only that she made an (apparently valid) argument using F in addition to P1, P2, and P3.
If the Authority’s argument doesn’t show the mistake in C1, how should I decide whether to believe that C1 has a mistake, that C2 has a mistake, or that the premises F, P1, P2, and P3 actually lead to a contradiction, with both C1 and C2 being valid?
I think Bayesian reasoning would inevitably enter the game in that last step.
I guess it wasn’t clear: C1 and C2 referred to the reasonings as well as the conclusions they reached. You say belief is of no importance here, but I don’t see how you can talk about “defeat” if you’re not talking about justified belief.
For the first bullet: no, it is not possible, in any case, to conclude C2, for not to agree that one made a mistake (i.e., reasoned invalidly to T) is to deny the truth of ~T which was shown by Ms. Math to be true (a valid deduction).
I’m not sure I understood what you said here. Do you agree with what I said in the first bullet or not?
Second bullet: in the case of a theorem, to show the falsity of a conclusion (of a theorem) is to show that it is invalid. To say there is a mistake is a straightforward corollary of the nature of deductive inference that an invalid motion was committed.
Are you sure that’s correct? If there’s a contradiction within the set of axioms, you could find T and ~T following valid deductions, couldn’t you? Proving ~T and proving that the reasoning leading to T was invalid are only equivalent if you assume the axioms are not contradictory. Am I wrong?
P1, P2, and P3 are axiomatic statements. And their particular relationship indicates (the theorem) S, at least to the one who drew the conclusion. If a Ms. Math comes to show the invalidity of T (by F), such that ~T is valid (such that S = ~T), then that immediately shows that the claim of T (~S) was false. There is no need for belief here; ~T (or S) is true, and our fellow can continue in the vain belief that he wasn’t defeated, but that would be absolutely illogical; therefore, our fellow must accept the truth of ~T and admit defeat, or else he’ll have departed from the sphere of logic completely.
The problem I see here is: it seems like you are assuming that the proof of ~T clearly shows the problem (i.e., the invalid reasoning step) in the proof of T I had previously reasoned out. If it doesn’t, all the information I have is that both T and ~T are derived, apparently validly, from the axioms F, P1, P2, and P3. I don’t see why logic would force me to accept ~T instead of believing there’s a mistake I can’t see in the proof Ms. Math showed me, or, more plausibly, concluding that the axioms are contradictory.
2) that she wouldn’t be so illogical as to assert a circular argument where F would merely be a premiss, instead of being equivalent to the proper (valid) conclusion ~T.
Oh, now I see what you mean. I interpreted F as a new premise, a new axiom, not a whole argument about the (mistaken) reasoning that proved T. For example, (Wikipedia tells me that) the axiom of determinacy is inconsistent with the axiom of choice. If I had proved T in ZFC, and Ms. Math asserted the Axiom of Determinacy and proved ~T in ZFC+AD, and I didn’t know beforehand that AD is inconsistent with AC, I would still need to find out what the problem was.
I still think this is more consistent with the text of the original post, but now I understand what you meant by “I was being charitable with the puzzles”.
Thank you for your attention.
I agree. Actually I think that applies to the whole “Zen speaking” Eliezer often uses.
Very, very good post.
I realized I had this problem, and how much it could cost me, a few months ago.
I was going to write a small column for a newspaper made by students from my faculty. The subject was a delicate one, and the position I would argue against was strongly popular among one of the “colors” in the faculty. I already saw the irrationality in the old false dilemmas and the groupthink that guided most color politics, and I thought I could use the chance to criticize the “movement” as well as the specific cause.
I wrote a text with a strongly ironic introduction. I showed it to my father, and he said, “This is going to get you in trouble with these people in the future. You probably don’t want that.” My first reaction was something like this. But as I thought about it, I realized there was no need to provoke the “greens”. It would accomplish nothing and would possibly make me enemies. I rewrote that first paragraph, and the tone of the text changed completely, without removing or changing a single argument. When it was published, I even received compliments from one of the “greens”.
Since then, I’ve been careful about what I say, and more so about what I write. I think one thing aggravates the problem you described: when we write something, we can “hear” the specific tone we would use to say it, but the reader can’t. Normal statements may seem ironic, and ironic statements may seem to carry much more criticism than intended.
-- Edited for markdown syntax correction.
Same! Are you still around?
fanfictional works like Dante’s
That made me laugh. Calling Dante “fanfiction” of the Bible was just so unexpected and simultaneously so accurate.
Welcome! And congratulations on creating what’s probably the longest and most interesting introduction thread of all time (I haven’t read all the introduction threads, though).
I’ve read all your posts here. I now have to update my belief about rationality among Christians: until now, the most “rational” one I’d found turned out to be nothing beyond a repetitive expert in rationalization. Most others are relatively rational in most aspects of life, but choose to ignore the hard questions about the religion they profess (my own parents fall into this category). You seem to have clear thought, and a willingness to rethink your ideas. I hope you stay around.
On a side note, as others have already stated below, I think you misunderstand what Eliezer wants to do with FAI. I agree with what MixedNuts said here, though I would also recommend reading The Hidden Complexity of Wishes, if you haven’t yet. Eliezer is saner than he seems at first, in my opinion.
PS: How are you feeling about the reception so far?
EDIT: Clarifying: I agree with what MixedNuts said in the third and fourth paragraphs.
I don’t think you can have any kind of “rule” without that implying some kind of “logic”. And if you have any “rule” that allows “rules” to “interact”, there would be some kind of “set of patterns which follow from the rules”. Whatever those really mean.
I mean, I don’t see how you could have anything other than absolutely random noise without some kind of “rule” or “set of rules” governing whatever there is. Actually, I can’t imagine what the absence of “rules” would even be like. Is there some kind of “stuff” which the rules “rule”? Or is the structure of rules all that exists? (From what I’ve heard, Tegmark’s Ultimate Ensemble relies on that idea?) If that is the case, does it make sense to ask “what if nothing existed”?
I’m essentially confused about this, but I can’t see a way to throw Platonia away. I do think maybe we live in Platonia, if we are the very structure of rules, instead of some “stuff” the rules “rule”.
I guess I didn’t make much sense.
Hello. My name is Gustavo Bicalho, I’m from Brazil, and I turn 20 years old today. I intended to introduce myself here after I finished the Sequences (I’m halfway through the Fun Theory Sequence), but I thought I should give myself this as a birthday gift. Heh.
I have some background in computer programming, having taken a three-year technical course during high school. Although I don’t know much computer science (I know just a little about algorithm analysis, and that was self-taught from Wikipedia), I think programming has helped me reshape my way of thinking, making it more structured and precise. I try to improve it however I can, and this is one of the reasons I’m joining LessWrong.
For several reasons, though, I left the computing field (not completely) and I’m now a law student. I don’t know if you get many of those around here. Anyway, reasoning in this field seems, to me, especially biased. Of course, any reasoning about law involves thinking about ethics and politics, but that isn’t a license for fallacies or lack of rigor in arguments. I think this is a problem, and rationality can help me fight against it.
Also, I’m very interested in moral philosophy, as the foundation of law. Yudkowsky’s metaethics still isn’t completely clear to me, but I’ve seen some discussion about moral philosophy around here and I guess it’s probably worth reading (I have yet to read lukeprog’s No-Nonsense Metaethics). Especially, if there’s any discussion about justice or fairness, I would very much like to read it.
Besides that, I like to learn almost anything. Physics is interesting; math is very interesting. After reading the first sequences, cognitive science, evolutionary psychology and decision theory got onto the list, too. If I can learn at least the basics of these fields, I think I’ll be a better thinker and a better person. I think LessWrong is a good starting point for that, too.
I think that’s it.
Oh, if there’s some post/discussion around here about Law already, I would be very glad if someone pointed it out.
See you around!
Gust
PS: Wow, this took me three hours to write o.o Trying to make a good first impression is kinda hard. PPS: Three people in the same day! Is that usual?