If my tenants paid rent with a piece of paper that said “moneeez” on it, I wouldn’t call it paying rent.
In your view, don’t all beliefs pay rent in some anticipated experience, no matter how bad that rent is?
“Smart and beautiful” Joe is being Pascal’s-mugged by his own beliefs. His anticipated experiences promise exorbitantly high utility, and when failure costs (relatively) little, it subtracts little utility by comparison.
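To put illustrative numbers on that (mine, not anything Joe actually computes): if the believed-in payoff B dwarfs the failure cost c, the expected utility stays positive even at a tiny success probability p:

```latex
\mathbb{E}[U] = p \cdot B - (1-p) \cdot c,
\qquad \text{e.g.} \quad
0.01 \cdot 1000 - 0.99 \cdot 1 = 9.01 > 0.
```

That asymmetry is the mugging: the belief survives on promised payoff alone.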
I suppose you could use the same argument for the lottery-playing Joe. And you would realize that people like Joe, on average, are worse off. You wouldn’t want to be Joe. But once you are Joe, his irrationality looks different from the inside.
But why do beliefs need to pay rent in anticipated experiences? Why can’t they pay rent in utility?
If some average Joe believes he’s smart and beautiful, and that gives him utility, is that necessarily a bad thing? Joe approaches a girl in a bar, dips his sweaty fingers in her iced drink, cracks a piece of ice in his teeth, pulls it out of his mouth, shoves it in her face for demonstration, and says, “Now that I’ve broken the ice—”
She thinks: “What a butt-ugly idiot!” and gets the hell away from him.
Joe goes on happily believing that he’s smart and beautiful.
For myself, the answer is obvious: my beliefs are means to an end, not ends in themselves. They’re utility producers only insofar as they help me accomplish utility-producing operations. If I were to buy stock believing that its price would go up, I better hope my belief paid its rent in correct anticipation, or else it goes out the door.
But for Joe? If he has utility-pumping beliefs, then why not? It’s not like he would get any smarter or prettier by figuring out he’s been a butt-ugly idiot this whole time.
More generally, you cannot rigorously prove that for all integers n > 0, P(n) → P(n+1) if it is not true, and in particular if P(1) does not imply P(2).
Sorry, I can’t figure out what you mean here. Of course you can’t rigorously prove something that’s not true.
I have a feeling that our conversation boils down to the following:
Me: There exists a case where induction fails at n=2.
You: For all cases, the fact that induction doesn’t fail at n=2 doesn’t mean induction doesn’t fail. Conversely, if induction fails, that doesn’t mean it fails at n=2. You have to look carefully at why and where it fails instead of defaulting to “it works at n=2, therefore it works.”
Is that correct, or am I misinterpreting?
Anyways, let’s suppose you’re making a valid point. Do you think that my interlocutors were arguing this very point? Or do you think they were arguing to put me back in my place, like TheOtherDave suggests, or that there was a similar human issue that had nothing to do with the actual argument?
“I refuse to cede you the role of instructor by letting you define the hypothetical.”
You know, come to think of it, that’s actually a very good description of the second person… who is, by the way, my dad.
I am a lot more successful if I adopt the stance of “I am thinking about a problem that interests me,” and if they express interest, explaining the problem as something I am presenting to myself, rather than to them. Or, if they don’t, talking about something else.
This hasn’t ever occurred to me, but I’ll try it the next time a similar situation arises.
But why can you take a horse from the overlap? You can if the overlap is non-empty. Is the overlap non-empty? It has n-1 horses, so it is non-empty if n-1 > 0. Is n-1 > 0? It is if n > 1. Is n > 1? No, we want the proof to cover the case where n=1.
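A minimal sketch of that chain in symbols (my notation; the original proof never spells this out):

```latex
A = \{h_1, \dots, h_n\}, \quad B = \{h_2, \dots, h_{n+1}\}, \quad
A \cap B = \{h_2, \dots, h_n\}, \quad |A \cap B| = n - 1.
```

The color of A propagates to B only through a shared horse, i.e. only if n - 1 > 0. At n = 1 the overlap is empty, A = {h_1} and B = {h_2} share nothing, and the step P(1) → P(2) silently fails.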
That’s exactly what I was trying to get them to understand.
Do you think that they couldn’t, and that’s why they started arguing with me on irrelevant grounds?
… The first n horses and the second n horses have an overlap of n-1 horses that are all the same color. So the first and the last horse have to be the same color. Sorry, I thought that was obvious.
I see your point, though. This time, I was trying to reduce the word count because the audience is clearly intelligent enough to make that leap of logic. I can say the same for both of my “opponents” described above, because both of them are well above average intellectually. I honestly don’t remember if I took that extra step in real life. If I haven’t, do you think that was the issue both people had with my proof?
I have a feeling that the second person’s problem with it was not from nitpicking on the details, though. I feel like something else made him angry.
I suspect that I lost the second person way before horses even became an issue. When he started picking on my words, “horses” and “different world” and “hypothetical person” didn’t really matter anymore. He was just angry. What he was saying didn’t make sense from that point on. For whatever reason, he stopped responding to logic.
But I don’t know what I said to make him this angry in the first place.
I don’t think I ever got to my “ultimate” conclusion (that all of the operations that appear in step n must appear in the basis step).
I was trying to use this example where the proof failed at n=2 to show that it’s possible in principle for a (specific other) proof to fail at n=2. Higher-order basis steps would be necessary only if there were even more operations.
Induction based on n=1 works sometimes, but not always. That was my point.
The problem with the horses of one color problem is that you are using sloppy verbal reasoning that hides an unjustified assumption that n > 1.
I’m not sure what you mean. I thought I stated, each time, whether I was assuming n=1 or n=2.
Most of the comments in this discussion focused on topics that are emotionally significant for your “opponent.” But here’s something that happened to me twice.
I was trying to explain to two intelligent people (separately) that mathematical induction should start with the second step, not the first. In my particular case, a homework assignment had us do induction on the rows of a lower triangular matrix as it was being multiplied by various vectors; the first row only had multiplication, the second row both multiplication and addition. I figured it was safer to start with a more representative row.
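Roughly what that looked like (a toy 2×2 stand-in, not the actual homework matrix): for a lower triangular L, the first row of Lx is a bare product, and the second row is the first place addition appears:

```latex
L x =
\begin{pmatrix} \ell_{11} & 0 \\ \ell_{21} & \ell_{22} \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
=
\begin{pmatrix} \ell_{11} x_1 \\ \ell_{21} x_1 + \ell_{22} x_2 \end{pmatrix}.
```

An inductive step that carries a sum from row n to row n+1 exercises an operation the n=1 row never performs, which is why the second row felt like the safer basis.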
When a classmate disagreed with me, I found this example on Wikipedia. His counter-argument was that this wasn’t a case of induction failing at n=2. He argued that the hypothesis was worded incorrectly, akin to the proof that a cat has nine tails. I voiced my agreement with him: “one horse of one color” is only semantically similar to “two horses of one color”; the two are in fact as different as “No cat (1)” and “no cat (2).” I tried to get him to come to this conclusion on his own. Midway through, he caught me and said that I was misinterpreting what he was saying.
The second person is not a mathematician, but he understands the principles of mathematical induction (as I’d made sure before telling him about horses). And this led to one of the most frustrating arguments I’d ever had in my life. Here’s our approximate, abridged dialogue (sans the colorful language):
Me: One horse is of one color. Suppose every n horses are of one color. Add the n+1st horse, and take n out of those horses. They’re all of one color by assumption. Remove 1 horse and take the one that’s been left out. You again have n horses, so they must be of one color. Therefore, all horses are of one color.
Him: This proof can’t be right because its result is wrong.
Me: But then, suppose we do the same proof, but starting with n=2 horses. This proof would be correct.
Him: No, it won’t be, because the result is still wrong. Horses have different colors.
Me: Fine, then. Suppose this is happening in a different world. For all you know, all horses there can be of one color.
Him: There’re no horses in a different world. This is pointless. (By this time, he was starting to get angry.)
Me: Okay! It’s on someone’s ranch! In this world! If you go look at this person’s horses, every two you can possibly pick are of the same color. Therefore, all of his horses are of the same color.
Him: I don’t know anyone whose horses are of the same color. So they’re not all of one color, and your proof is wrong.
Me: It’s a hypothetical person. Do you agree, for this hypothetical person—
Him: No, I don’t agree because this is a hypothetical person, etc, etc. What kind of stupid problems do you do in math, anyway?
Me: (having difficulties inserting words).
Him: Since the result is wrong, the proof is wrong. Period. Stop wasting my time with this pointless stuff. This is stupid and pointless, etc, etc. Whoever teaches you this stuff should be fired.
Me: (still having difficulties inserting words) … Wikipe—…
Him: And Wikipedia is wrong all the time, and it’s created by regular idiots who have too much time on their hands and don’t actually know jack, etc, etc. Besides, one horse can have more than one color. Therefore, all math is stupid. QED.
THE END.
To the best of my knowledge, neither of these two people were emotionally involved with mathematical induction. Both of them were positively disposed at the beginning of the argument. Both of them are intelligent and curious. What on Earth went wrong here?
^This is one of the reasons why I shouldn’t start arguments about theism: I can’t even convince people of this mathematical technicality.
So what you’re basically saying is that EDT is vulnerable to Simpson’s Paradox?
But then, aren’t all conclusions drawn from incomplete sets of data potentially at risk from unobserved causations? And complete sets of data are ridiculously hard (if not impossible) to obtain anyway.
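For a concrete instance of the risk (the classic kidney-stone numbers from Charig et al. 1986, not data from this discussion), here is a minimal sketch in Python: treatment A wins within each stratum yet loses in the pooled data, exactly the reversal an unobserved cause can produce:

```python
# Simpson's Paradox: A beats B within every stratum, yet loses once
# the strata are pooled, because stone size confounds both the choice
# of treatment and the outcome.
data = {
    # (treatment, stone size): (successes, patients)
    ("A", "small"): (81, 87),
    ("A", "large"): (192, 263),
    ("B", "small"): (234, 270),
    ("B", "large"): (55, 80),
}

for size in ("small", "large"):
    a_s, a_n = data[("A", size)]
    b_s, b_n = data[("B", size)]
    print(f"{size} stones: A = {a_s/a_n:.0%}, B = {b_s/b_n:.0%}")  # A wins both

a_s, a_n = (sum(data[("A", s)][i] for s in ("small", "large")) for i in (0, 1))
b_s, b_n = (sum(data[("B", s)][i] for s in ("small", "large")) for i in (0, 1))
print(f"pooled: A = {a_s/a_n:.0%}, B = {b_s/b_n:.0%}")  # B 'wins' overall
```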
I’m sure that what you said is absolutely technically correct, but I had to reread it 5 times just to figure out what you meant, and I’m still not sure I have.
Are you saying that the strategy to indiscriminately like whatever’s popular will lead to worse outcomes because of random effects, as in this experiment that showed that popularity is largely random? Then you’re right—because what are the chances that your preferences exactly match the popular choice?
On the other hand, if it so happens that you end up liking something that’s popular and you couldn’t tell it apart from something similar in a blind test, is it in any way bad that you’re getting utility out of it?
“I wish that the genie could understand a programming language.”
Then I could program it unambiguously. I obviously wouldn’t be able to program my mother out of the burning building on the spot, but at least there would be a host of other wishes I could make that the genie wouldn’t be able to screw up.
I think alexflint’s point is something along the lines of “it’s okay to like popular things just because they’re popular.”
Thanks for bringing this up. Now that you’ve said it, I think I’d observed something similar about myself. Like you, I find it far easier to solve internal problems than external. In SCUBA class, I could sketch the inner mechanism of the 2nd stage, but I’d be the last to put my equipment together by the side of the pool.
Your description maps really well onto introversion and extroversion. I searched for psychology articles on extraversion, introversion and learning styles. A lot of research has been done in that area. For example:
Through the use of EPQ vs. LSQ and CSI questionnaires (see FOOTNOTE below), Furnham (1992) found that extraverts are far more active and far less reflective in their learning. They don’t need to chew over the information before they act on it.
Jackson and Lawty-Jones (1996) confirmed those findings with a similar study but fewer questionnaires (only EPQ vs. LSQ).
Zhang (2001) administered more questionnaires (TSI vs. SVSDS) to find that, unsurprisingly, having a social personality makes you more likely to employ an external thinking style, that is, to interact with others.
More studies used more questionnaires to find the same (e.g., Furnham, 1996; Furnham, Jackson, and Miller, 1999; and many others, I’m sure).
The above seem to answer Swimmer963’s question: extraverted people are better at applying the knowledge they have quickly and on the spot in collaborative situations, while introverted people need time to reflect. Caveat: this conclusion is based on questionnaire studies, where people described their behavior instead of demonstrating it.
Unfortunately, I couldn’t find a single good experiment that addressed this question directly. But I did find this one…
Suda and Fouts (1980) set up an experiment where a sixth-grader would be led to believe that a little girl in the room next door had fallen off a chair. The sixth-grader then faced a choice: go into the girl’s room (active help), go into the experimenter’s room (passive help), or continue with the ostensible experiment (something about children’s drawings of people). If the sixth-grader tried to help, the experimenter would return. The peer present in the room was a confederate instructed not to initiate interactions or helping behavior.
I wish they’d included a table of their results in the article. Here’s what I managed to glean from the blurb: overall, more extraverts helped. Extraverts tended to help actively, by going to the girl’s room themselves. Only a couple of introverts tried to help actively; most of those who chose to help at all did so passively.
During the interviews afterwards, half of the introverted kids said that they didn’t actively help because it might’ve been “wrong” to stop drawing.
What conclusions / interpretations can we draw from this experiment, aside from the obvious? Introverted kids might not have been as good at reacting to the world around them as extraverted kids. This might be the very same dynamic that leads to introverted adults doing worse in the real-world “people situation” of the Quebec test than on the written Ontario test.
FOOTNOTE:
A popular classification of personality traits in the articles I’ve read was the one due to the Eysenck Personality Questionnaire (EPQ). It measures personality across three dimensions: Extraversion vs. Introversion, Neuroticism vs. Stability, and Psychoticism vs. Socialisation (see the Wikipedia article).
Honey and Mumford’s (1982) Learning Style Questionnaire (LSQ) identifies four learning styles: Activists jump into the problem at hand. “They revel in short-term crisis fire fighting,” as Furnham puts it. Reflectors are careful and methodical; they prefer to stand back and analyze everything carefully before they act. Theorists tend to synthesize the facts they observe into coherent theories. And Pragmatists want what they learn to be practical and applicable, preferably immediately.
Whetten and Cameron’s (1984) Cognitive Style Instrument (CSI) considers learning styles from a slightly different angle than the LSQ, by analyzing how people gather information (perceptive vs. receptive), evaluate information (systematic vs. intuitive), and respond to information (active vs. reflective). The last dimension is the most interesting in this case: it describes whether people act on information quickly (active) or prefer to reflect on it before taking action (reflective).
Sternberg and Wagner’s (1992) Thinking Styles Inventory (TSI) asks 65 questions to classify people along 13 thinking styles. Two of these are external and internal: people who think externally are eager to use their knowledge to interact with people, while those who think internally prefer to work independently.
The Short-Version Self-Directed Search (SVSDS) assesses personality types across 6 scales, one of which is social.
REFERENCES:
(1) Furnham A. Personality and Learning Style—a Study of 3 Instruments. Personality and Individual Differences 1992 APR;13(4):429-438.
(2) Jackson C, Lawty-Jones M. Explaining the overlap between personality and learning style. Personality and Individual Differences 1996 MAR;20(3):293-300.
(3) Zhang LF. Thinking styles and personality types revisited. Personality and Individual Differences 2001 OCT 15;31(6):883-894.
(4) Furnham A. The FIRO-B, the learning style questionnaire, and the five-factor model. Journal of Social Behavior and Personality 1996 JUN;11(2):285-299.
(5) Furnham A, Jackson CJ, Miller T. Personality, learning style and work performance. Personality and Individual Differences 1999 DEC;27(6):1113-1122.
(6) Suda W, Fouts G. Effects of Peer Presence on Helping in Introverted and Extroverted Children. Child Dev 1980;51(4):1272-1275.
But, given that we have a grand total of one data point, I can’t narrow it down to a single answer.
Exactly!
Given just one data point, every explanation for why we didn’t observe water boiling at 100 degrees C is an excuse for why it should have. To honestly answer this question, we would have to have performed additional experiments.
But we had already had a conclusion we were supposed to have reached—a truth by definition, in our case. Reaching that conclusion in our imperfect circumstances required rationalization.
Well, in that case Earth doesn’t really go around the sun, it just goes around the center of this galaxy on this weird wiggly orbit, and the sun happens to always be in a certain position with respect to… ouch! See what I did? I babbled myself into ineptness by trying to be “absolutely technically correct.” I just can’t. Even if I finished that “absolutely technically correct” sentence, I’d probably be wrong in some other way I haven’t even imagined yet.
So let’s accept the fact that not everything that is said which is true is “absolutely technically correct.” (True with respect to The Simple Truth, ugh, this semantics is tiring so I’ll quit).
The not-technically-correct truth for Hunga Huntergatherer and the not-technically-correct truth for Amara Astronomer seem to verbally contradict each other in the same way that Albert::sound verbally contradicts Barry::sound. Is the solution that one is false and the other is true? You take the side of Amara Astronomer (and so do I) because the maps in our heads resemble this view better than the other.
The fact that these two notions seem contradictory is not because they are contradictory, but because our minds are trying to map them both into the same spot.
Your solution brings us back to analyzing maps. Its analogue is defining Albert::sound to be correct. I don’t believe that the point of the article was to define truth. It’s practically impossible to do so (see my fumble above). I think the point of the article was that contradictions in our ill-defined language (and concepts and maps that come with it) do not imply contradictions in reality.
You’re right, of course.
I’d written the above before I read this defense of researchers, before I knew to watch myself when defending research subjects. Maybe I was far too in shock to actually believe that people would honestly think that.
I don’t know too many theist janitors, either. Doesn’t mean they don’t exist.
From my perspective, it sucks to be them. But once you’re them, all you can do is minimize your misery by finding some local utility maximum and staying there.