That’s a clever point. Maybe there are two uses for an analogy, then: (1) to reason informally (carrying out a logical argument in natural, more familiar language). This could (and I think should) be done formally whenever possible. And (2) as an “intuition pump” (a term from Dennett, I think), where the point is to provide a tangible analogue to your model, enabling someone to “understand” you, but not necessarily proving anything.
Call it “expected” truth, analogous to “expected value” in prob and stats. It’s effectively a way to incorporate a risk analysis into your reasoning.
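To make the “expected truth” analogy concrete, here’s a toy sketch (the scenario and all numbers are invented for illustration): weight each outcome of acting on a claim by the probability that the claim is true, exactly as expected value weights payoffs by their probabilities.

```python
# Toy "expected truth" calculation: fold a risk analysis into one number by
# weighting each outcome by its probability. The scenario is made up.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs; probabilities sum to 1."""
    return sum(p * v for p, v in outcomes)

# Suppose we judge a claim 90% likely to be true; acting on it pays 10 if
# it's true and costs 50 if it's false.
ev = expected_value([(0.9, 10), (0.1, -50)])
print(ev)  # 4.0
```

Even a claim we’re quite confident in can have a modest expectation once the downside of being wrong is priced in, which is the point of the analogy.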
Maybe I’m missing some nuances here, but couldn’t we just say we’re surprised when “we are presented with a new degree of freedom in a (stochastic or deterministic) model we previously had about the world”?
Take the 52 cards example. We expect to see a number and a suit on the card. If we’re then presented with a card showing an unfamiliar number or suit (or any other gibberish), then our model has been falsified. There was one more possible outcome (degree of freedom) that we were previously unaware of.
Here in Canada, we measure “long” in units of time taken to traverse. Mean speed is never given.
So pretty much, this: http://en.wikipedia.org/wiki/Medawar_zone
Think about our evolutionary history. Presumably, life was less stable and payoffs less predictable than they are today. In that case it would have been better to have a strong hyperbolic discount rate; now that outcomes are increasingly reliable, that rate should have dropped, but it (presumably) hasn’t.
Of course, our intuitive discount rate should never reach the exponential rate that a normative model would predict, because there are always new unforeseen factors, but I would contend that the uncertainties have dropped substantially. This would make the particular hyperbolic rate at which we intuitively discount payoffs today biased, while in our evolutionary past it would presumably have been a better approximation of a suitable discount rate.
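For what it’s worth, the two discounting curves I mean can be compared in a few lines. The parameter values below are arbitrary, chosen only so the two curves agree at t = 1; the standard forms are V = A/(1 + kt) for hyperbolic and V = Ae^(-rt) for exponential discounting.

```python
import math

def hyperbolic(amount, k, t):
    # V = A / (1 + k*t): the effective discount rate falls as t grows
    return amount / (1 + k * t)

def exponential(amount, r, t):
    # V = A * e^(-r*t): a constant discount rate, the time-consistent benchmark
    return amount * math.exp(-r * t)

# Match the curves at t = 1 (k = 0.5 gives 1/1.5; r = ln(1.5) gives the same),
# then compare them further out.
k, r = 0.5, math.log(1.5)
print(hyperbolic(1.0, k, 10))   # ~0.167
print(exponential(1.0, r, 10))  # ~0.017
```

Matched at t = 1, the hyperbolic curve values a payoff ten steps out roughly ten times as highly as the exponential one does, which is the kind of mismatch I’m calling biased under today’s more reliable conditions.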
Mathematicians have tried to find ways of dealing with this sort of thing, too: http://en.wikipedia.org/wiki/Fuzzy_set
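A minimal sketch of the fuzzy-set idea, with thresholds invented purely for illustration: membership is graded on [0, 1] rather than binary, and the usual fuzzy operations replace Boolean AND/OR with min/max.

```python
def tall(height_cm):
    """Graded membership in the fuzzy set 'tall': 0 below 160 cm, 1 above
    190 cm, linear in between. The thresholds are arbitrary examples."""
    return min(1.0, max(0.0, (height_cm - 160) / 30))

# Standard fuzzy operations: complement = 1 - m, AND = min, OR = max.
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

print(tall(175))  # 0.5 -- "somewhat tall" rather than a forced yes/no
```

The appeal for vague predicates is that borderline cases get intermediate values instead of an arbitrary cutoff.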
Do you think this method of modelling would make the problem soluble? Or are there still issues?
Attacking the science is WAY easier than defending ID. We should always make sure to distinguish between the two things when talking to creationists. Most of their “arguments” are exactly of this “have the cake and eat it too” variety (cake=attack evo, eating=defend ID afterwards).
There’s no denying that prestige is a better indicator of quality than random chance. The question is: is it the best we can do?
Where does the prestige come from? Likely, it’s got a lot to do with public perception of quality in the first place. If we can improve objectivity in the judgment of this quality, then that’s great; but the prestige would follow it along. We won’t ‘do better’ than following the prestige, the prestige will ‘do better’ at following the quality.
Sorry for the non-answer here, but I take a different approach: I work when I feel like it.
I’ll try getting started on the actual work, whether I want to or not (you have to overcome that initial mental inertia). Then, once I’m about half an hour into the work, if I find myself watching the clock or thinking about how I should go check my RSS feeds, I’ll stop and switch to my list of more mundane tasks.
Just assess the first half-hour of your work, and if it’s crap, move on and come back to it later. If it’s crap, you broke even; if it’s not, you’ve saved yourself a half-hour wasted on Quake.
As a lowly grad student, I’m frequently congratulated by professors for being clever. But, I’m mostly just explaining things that they already know. I think they’re just impressed because they have no expectations for me, and I only speak up when I know what’s going on (which is rarely).
On the flip side, they spend an hour lecturing for every ten minutes of conversation I have with them. As a result, they have far more opportunity to reveal some misconception or minor incompetence. We’re unlikely to share the same misconceptions about the world, so I’ll probably be able to call them on this error. I think the difference is that they—with their high status—are the ones who have to do most of the talking. Even someone who is a leading expert in their field is going to get called out by some know-nothing if they talk long enough.
So pretty much: I think you’re probably right—that our higher expectations will bias us away from excusing and forgetting a mistake made by someone who is of high status. I just want to add that we also give them more opportunities to reveal their shortcomings because they have high status.
What about the possibility that free will is either a fundamental/essential property within the universe (like an elan vital for free will), or an emergent property of certain complex systems? In either of these two cases, reductionism would still be true, it would just leave most current reductionists wrong about free will.
The essentialist theory? I agree. I’m simply being as generous as conceivable about empirical details in my still-winning argument.
I think this hierarchy can be derived from the way I’ve developed for thinking about this problem—considering the person’s beliefs as a “memeplex” (“memotype”?). Replacing a few memes within a creationist’s head—even if the new memes are better—can significantly increase the net cognitive dissonance going on within their skull and prompt them to reject the facts as something that must have been tampered with, or otherwise somehow invalid, protecting their more self-consistent, incorrect model.
Once the memeplex reaches a stable local minimum in its dissonance landscape (analogous to a fitness landscape), true information can seem worse. A well-integrated memeplex would be “truly part of you”.
EDIT: I realize this analogy is at risk of noticing “surface analogies” between genetics and memetics, which I’ve just been warned against in the article. I don’t think this is the case, but I’ll leave the caveat that my understanding of this idea may be as low as level 2.
Thanks for the reading. I’m still playing some catch-up with the community.
Eliezer’s issue with the word “emergence” seems to stem from the fact that people treat the statement “X emerges from process Y” as some sort of explanation. I completely agree with him that it’s actually a nice-sounding non-explanation. I’m in no way claiming, or trying to imply by my statement above, that because consciousness is an emergent property I’ve explained something. However, being an emergent property is the only alternative to being an essential property. I think my statement above does a good job of spanning the space of possible explanations, which was its purpose.
Please do correct me again if I’m wrong on that, though.
I think we completely agree about all of this, then. I’m just letting the terminological confusion that was introduced by Ian C’s comment muddy my own attempts at articulating myself.
I guess the point I was trying to make was that, regardless of the result—free will exists, or free will doesn’t exist—there’s no reason to think that this result would have anything to do with the question of whether reductionism is a good research programme. We would still attempt to reduce theories as much as possible, even if free will was “magic”.
The part about “what current reductionists believe”: I assumed that most reductionists think of free will as nonexistent, or an illusion. So the hypothetical case where free will does exist (magical or otherwise) would leave them hypothetically wrong about it.
Myself, I’m a fan of Dennett’s stance—might as well call the thing we have “free will” even though there’s nothing magic about it. Sorry for the long string of muddled comments. I’ll try thinking harder the first time around next time.
My girlfriend and I have been casually collecting data on this over the past 2 years. We occasionally end up in social situations with people who are flaky enough to take this stuff seriously. They’ll usually—after about 15 minutes of conversation—make note of our ‘auras’ or personalities, and then guess a sign for each of us. We encourage them to try. Of the eight guesses we’ve had about our signs so far, none has been correct within 2 tries. I hope a few more years of this (and maybe some more data from less flaky friends) will offer enough data points to see if there’s any bias, or if the odds of a good initial guess are uniform.
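Under the null hypothesis that the guesses are pure chance (12 equally likely signs, a guess counting as a hit if either of its 2 tries is right), going 0 for 8 isn’t even surprising. A quick back-of-the-envelope:

```python
# Probability of zero correct sign guesses in 8 attempts, assuming 12
# equally likely signs and 2 tries per attempt:
# per-attempt hit probability = 2/12, so miss probability = 5/6.
p_all_miss = (5 / 6) ** 8
print(round(p_all_miss, 3))  # 0.233
```

So there’s roughly a 23% chance of our exact result even if the guessers have no skill at all, which is why we need more data points before concluding anything.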
An extremely minor quibble with your site (I’m just trying to be helpful):
You ask for an opinion using a question: “Does god exist?”, but then you allow people to provide answers as if they’re agreeing or disagreeing with an affirmative statement (options: agree, neutral, disagree). The grammatical disconnect caused me some confusion when I first looked at the site. I think changing this to be either:
(1) “God exists” [agree, neutral, disagree]
(2) “Does god exist?” [yes, maybe, no]
would make this more clear.
I haven’t gotten to really read the above article yet, but do you think the proposed method would perform substantially better than a simple cluster analysis?
I had vaguely what Jonicholas mentions in mind.
Get him out, please.