When dealing with the possibility of ideology influencing results, one needs to be careful that one isn’t engaging in projection driven by one’s own ideology. Otherwise this can turn into a fully general counter-argument. (To use one of the possibly more amusing examples, look at Conservapedia’s labeling of the complex numbers and the axiom of choice as products of liberal ideology.)
Also, an incidental note about the issue of climate change: we should expect that most aspects of climate change will be bad. Human civilization has become an extremely sensitive system over the last few hundred years. We’ve settled far more territory (especially on the coasts) and have far more complicated, interacting agriculture. Changing the environment in any way is a change from the status quo, and changing the status quo in any large way will be economically disruptive. Note however that there are a handful of positives to an increase in average global temperature that are clearly acknowledged in the literature. Two examples are the creation of a Northwest Passage, and the opening of cold areas of Russia to more productive agriculture (or in some cases, any agriculture at all as the permafrost melts).
(To use one of the possibly more amusing examples, look at Conservapedia’s labeling of the complex numbers and the axiom of choice as products of liberal ideology.)
It looks like my memory was slightly off. The main focus is apparently on the project founder’s belief that “liberals” don’t like elementary proofs. See this discussion. I’m a bit busy right now but I’ll see if I can dig up his comments about the Axiom of Choice.
I checked that page. I don’t see any statement that “liberals” don’t like elementary proofs.
In this discussion, Andy Schlafly, to whom you are apparently referring since he appears to have control over content, is arguing with Mark Gall over the best definition of “elementary proof”. Essentially Mark believes that the definition should reflect what he believes to be common usage, and Andy believes that the definition should reflect a combination of usage and logic, ruling out certain usage as mis-usage. I think Andy is essentially identifying what he believes to be a natural kind, and believes his definition to cut nature at the joints.
Andy uses the word “liberal” in only one place, here:
Academic mathematicians, as in other academic fields, are in denial about many things: gender differences, religious truth, the value of self-criticism, and the bankruptcy of liberal politics.
“Liberal politics” here is given only as an example of error, one example among several, another example being atheism. The statement is not that liberals don’t like elementary proofs any more than that atheists don’t like elementary proofs. In fact I found no statement that anybody doesn’t like elementary proofs. Rather, the discussion appears to be about the best definition of elementary proofs, not about liking or disliking.
Also, the “talk” pages of Conservapedia, like the “talk” pages of Wikipedia, are not part of the encyclopedia proper. I think it’s incorrect, then, to say that Conservapedia does something when in fact it is done in the talk pages.
Ok. If you prefer, Andrew is even more blunt about his meaning here, where he says:
The concept of an elementary proof is well-known in mathematics and was widely taught to top mathematics students at least until 25 years ago. Yet Wikipedia refused for months to have an entry about it, and only relented when I pointed out here that MathWorld does have an entry.
Why such resistance? Because many of the recent claims of proofs, such as Wiles’ proof of Fermat’s Last Theorem, are not elementary proofs and liberals don’t want to admit that. Liberals prefer instead to claim that mathematicians today are smarter than the devoutly Christian mathematicians like Bernhard Riemann and Carl Gauss. Not so, and this omission of this entry on Wikipedia was due to liberal bias.
Explained another way, liberals detest accountability, in this case the accountability of the rigorous criteria for an elementary proof. Godspeed
(End quote from Andrew).
That example seems to be pretty explicit. I agree that in general what happens on a talk page is not the same thing as what happens in the encyclopedia proper, but Andrew includes this claim among his examples of bias in Wikipedia, a page which is in Conservapedia’s main space (although that page doesn’t explicitly call it an example of “liberal” bias).
Okay, that’s close to what you were saying, though this seems to be a speculative hypothesis he came up with to explain the striking fact that Wikipedia did not include the entry. The important topic is the omission from Wikipedia. The explanation is his attempt to understand why it happened. Many people are apt to come up with highly speculative explanations when trying to account for surprising events. I don’t think all that much should be made of such things. In any case, I’m not convinced that he’s wrong. (I’m not convinced that he’s right either.)
It isn’t that surprising that we’d have that sort of thing missing. A lot of the articles I’ve written for Wikipedia are ones I only wrote because I was trying to look them up and was surprised that we didn’t have them. People don’t appreciate how many gaps Wikipedia still has. For example, until I wrote it, there was no Wikipedia article for Samuel Molyneux, who was a major historical astronomer.
In any case, I’m not convinced that he’s wrong. (I’m not convinced that he’s right either.)
Beware false compromise. The truth does not always lie in the middle. (Incidentally, are you a Bayesian? If so, around what probability do you define as being “convinced”?)
To my mind, being convinced of a claim is essentially being ready to take some action which assumes the claim is true. I think that’s the relevant threshold, and I think that’s essentially how the term is used in ordinary speech. Anyway, that’s how I think I should use it.
That being the case, then whether to be convinced or not depends on costs and benefits, downsides and upsides. For example, if the upside is $1 and the downside is $100, then I will not be convinced enough to take a risky action unless I assign its success (and, therefore, the truth of statements on which its success depends) a probability greater than about 99%. But if the upside and downside are both $1 then I will readily take action even if I assign the probability slightly over 50%. (By this logic, Pascal can be convinced of God’s existence even if the probability he assigns to it is much less than 50% - which admittedly seems to represent a breakdown in my understanding of “convinced”, but I still think it works above 50%.)
In the current case there are essentially no practical consequences from being right or wrong. What I find, though, is that when you take away practical consequences, most people interpret this as a license to have a great deal of confidence in all sorts of conflicting (and therefore at least half wrong) beliefs. This makes sense rationally, if we assume that the costs of having false beliefs are low; and in fact there’s an even stronger case for being carelessly overconfident, which is that even false beliefs, confidently asserted, can be beneficial. The benefit in question is largely a social benefit, tribal affiliation for example.
So then, one might think, I should have little problem becoming convinced by the first claim about academic mathematicians that comes along, seeing as there is so little downside from indulging in delusion. But this does not mean that there is no downside. I think that a certain amount of harm is done to a person who has false beliefs, and whether that harm outweighs the benefit depends on what that person is doing with himself.
In any case I think that when it comes to beliefs that have important practical consequences, the harm of delusion is typically much greater than the harm of not knowing, provided one realizes that one does not know. So in practical matters it is usually better to admit ignorance than to delusionally become convinced of a randomly selected belief. For this reason, I think that in practical matters one should usually place the threshold rather high before committing oneself to some belief. So the real, everyday world typically offers us the inverse of Pascal’s wager: the price of commitment to a false belief is high, and the price of admitting one does not know (agnosticism) is relatively low.
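The $1/$100 arithmetic from earlier in this comment can be sketched in a few lines (a rough illustration using the hypothetical dollar amounts above): act only when the expected gain outweighs the expected loss, which pins down the probability at which one becomes “convinced”.

```python
# A minimal sketch of the cost-benefit threshold described above.
# Acting is worthwhile when p * upside > (1 - p) * downside,
# i.e. when p > downside / (upside + downside).

def conviction_threshold(upside, downside):
    """Probability above which the expected return of acting is positive."""
    return downside / (upside + downside)

# $1 upside against a $100 downside: need p > ~99%.
print(conviction_threshold(1, 100))   # ~0.9901
# $1 against $1: anything over 50% will do.
print(conviction_threshold(1, 1))     # 0.5
```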
If I think that I have a 10% chance of being shot today, and I wear a bulletproof vest in response, that is not the same as being convinced that I will be shot.
Your actual belief in different things does not, so far as I can tell, depend on how useful it is to act as if those things are true. How you act in response to your beliefs does.
Edit:
Actually, wait a sec.
By this logic, Pascal can be convinced of God’s existence even if the probability he assigns to it is much less than 50% - which admittedly seems to represent a breakdown in my understanding of “convinced”, but I still think it works above 50%.
Just follow through on the fact that you noticed this.
You have only pointed out an incompleteness in my account that I already pointed out. I pointed out that below 50%, the account I gave of being convinced no longer seems to hold.
The perfect is the enemy of the good. That an account does not cover all cases does not mean the account is not on the right track. A strong attack on the account would be to offer a better account. JoshuaZ already offered an alternative account by implication, which (as I understand it) is that belief is simply a constant cutoff: for example, a probability assignment above 80% is belief, or maybe 50%, or maybe 90%.
But here’s the thing: if you believe something, aren’t you willing to act on it? We regularly explain our actions in terms of beliefs. For example, suppose you walk out of the house taking your wife’s car keys. You get to your car, notice that you can’t start the engine, and at that point discover that you are holding your wife’s car keys. Suppose she asks you, “why did you take my keys”? The answer seems obvious: “I took these keys because I believed they were my car keys.” Isn’t that obvious? Of course that’s why you took them.
To restate, you did something that would have been successful had those keys been your keys. To restate, you acted in a way that would have been successful had your belief been true.
And I think this is generally a principle by which we explain our actions, particularly our mistaken actions. The explanation is that we acted in a way that would have worked out had our beliefs been correct. And so, your actions reveal your beliefs. By taking your wife’s car keys, you reveal your belief that they are your car keys.
So your actions reveal your beliefs. But here’s the problem: your actions are a product of a combination of your probability assignments and your value assignments, the costs and benefits. That’s why you are more ready to take risky action when the downside is low and the upside is high, and less ready to take risky action when the downside is high and the upside is low. So your actions are a product of a combination of probability assignments and value assignments.
But your actions meanwhile are in accordance with your beliefs.
Conclusion follows: your beliefs are a product of a combination of probability assignments and value assignments.
Now, as I said, this picture is incomplete. But it seems to hold within certain limits.
But here’s the thing: if you believe something, aren’t you willing to act on it? We regularly explain our actions in terms of beliefs. For example, suppose you walk out of the house taking your wife’s car keys. You get to your car, notice that you can’t start the engine, and at that point discover that you are holding your wife’s car keys. Suppose she asks you, “why did you take my keys”? The answer seems obvious: “I took these keys because I believed they were my car keys.” Isn’t that obvious? Of course that’s why you took them.
A utility maximizing Bayesian doesn’t say “oh, this has the highest probability so I’ll act like that’s true.” A utility maximizing Bayesian says “what course of action will give me the highest expected return given the probability distribution I have for all my hypotheses?” To use an example that might help, suppose A declares that they are going to toss two standard six-sided fair dice and take the sum of the two values. If anyone guesses the correct result then A will pay the guesser $10. I assign a low probability to the result being “7”, but that’s still my best guess. And one can construct other situations (if, for example, the payoff was $1000 if one correctly guessed and the guess happened to be an even number, then guessing 6 or guessing 8 makes the most sense). Does that help?
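The two-dice example can be checked directly; a quick sketch (payoffs as given above):

```python
from collections import Counter
from itertools import product

# Distribution of the sum of two fair six-sided dice.
sums = Counter(a + b for a, b in product(range(1, 7), repeat=2))
prob = {s: n / 36 for s, n in sums.items()}

# With a flat $10 payoff, the best guess simply maximizes probability:
# 7, even though P(sum == 7) is only 6/36.
best = max(prob, key=prob.get)
print(best, prob[best])                          # 7 ~0.167

# Variant: $1000, but only if the winning guess is even.
# 6 and 8 tie for the best expected return (5/36 each).
even_payoffs = {s: 1000 * p for s, p in prob.items() if s % 2 == 0}
print(max(even_payoffs, key=even_payoffs.get))   # 6 (ties with 8)
```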
A utility maximizing Bayesian says “what course of action will give me the highest expected return given the probability distribution I have for all my hypotheses?”
That matches my own description of what the brain does. I wrote briefly:
your actions are a product of a combination of your probability assignments and your value assignments, the costs and benefits.
which I explain elsewhere in more detail, and which matches your description of the utility maximizing Bayesian. It is the combination of your probability assignments and your value assignments which produces your expected return for each course of action you might take.
Does that help?
Depends what you mean. You are agreeing with my account, with the exception that you are saying that this describes a “utility maximizing Bayesian”, and I am saying that it describes any brain (more or less). That is, I think that brains work more or less in accordance with Bayesian principles, at least in certain areas. I can’t think that the brain’s calculation is tremendously precise, but I expect that it is good enough for survival.
Here’s a simple idea: everything we do is an action. To speak is to do something. Therefore speech is an action. Speech is declaration of belief. So declaration of belief is an action.
Now, let us consider what CuSithBell says:
Your actual belief in different things does not, so far as I can tell, depend on how useful it is to act as if those things are true. How you act in response to your beliefs does.
So, he agrees that how you act depends on utility. But, contrary to what he appears to believe, to declare a belief is to act—the action is linguistic. Therefore how you declare your beliefs depends on utility—that is, on the utility of making that declaration.
The utility of a declaration depends on its context, on how the declaration is used. And declarations are used. We make assertions, draw inferences, and consequently, act. So our actions depend on our statements. So our statements must be adjusted to the actions that depend on them. If someone is considering a highly risky undertaking, then we will avoid making assertions of belief unless our probability assignments are very high.
Maybe people have noticed this: people adjusting their statements, even retracting certain assertions of belief, once they discover that those statements are going to be put to a riskier use than they had thought. Maybe they have noticed it and believed it to be an inconsistency? No: it’s not an inconsistency. It’s a natural consequence of the process by which we decide where the threshold is. Here’s a bit of dialog:
Bob: There are no such things as ghosts.
Max: Let’s stay in this haunted house overnight.
Bob: Forget it!
Max: Why not?
Bob: Ghosts!
For one purpose (which involves no personal downside), Bob declares a disbelief in ghosts. For another purpose (which involves a significant personal downside if he’s wrong), Bob revises his statement. Here’s another one:
Bob: Bullets please. My revolver is empty.
Max: How do you know?
Bob: How do you think I know?
Max: Point it at your head and pull the trigger.
Bob: No!
Max: Why not?
Bob: Why do you think?
For one purpose (getting bullets), the downside is small, so Bob has no trouble saying that he knows his revolver is empty. For the other purpose, the downside is enormous, so Bob does not say that he knows it’s empty.
So, he agrees that how you act depends on utility. But, contrary to what he appears to believe, to declare a belief is to act—the action is linguistic. Therefore how you declare your beliefs depends on utility—that is, on the utility of making that declaration.
I apologize for giving you the impression I disagree with this. By ‘being convinced’, I thought you were talking about belief states rather than declarations of belief, and thence these errors arose (yes?).
I think that belief is a kind of internal declaration of belief, because it serves essentially the same function (internally) as declaration of belief serves (externally). Please allow me to explain.
There are two pictures of how the brain works which don’t match up comfortably. On one picture, the brain assigns a probability to something. On the other picture, the brain either believes, or fails to believe, something. The reason they don’t match up is that in the first picture the range of possible brain-states is continuous, ranging from P=0 to P=1. But in the second picture, the range of possible brain-states is binary: one state is the state of belief, the other is the state of failure to believe.
So the question then is, how do we reconcile these two pictures? My current view is that on a more fundamental level, our brains assign probabilities. And on a more superficial level, which is partially informed by the fundamental level, we flip a switch between two states: belief and failure to believe.
I think a key question here is: why do we have these two levels, the continuous level which assigns probabilities, and the binary level which flips a switch between two states? I think the reason for the second level is that action is (usually) binary. If you try to draw a map from probability assignment to best course of action (physical action involving our legs and arms), what you find is that the optimal leg/arm action quite often does not range continuously as probability assignment ranges from 0 to 1. Rather, at some threshold value, the optimal leg/arm action switches from one action to another, quite different action—with nothing in between.
So the level of action is a level populated by distinct courses of action with nothing in between, rather than a continuous range of action. What I think, then, is that the binary level of belief versus failure to believe is a kind of half-way point between probability assignments and leg/arm action. What it is, is a translation of assignment of probability (which ranges continuously from zero to one) into a non-continuous, binary belief which is immediately translatable into decision and then into leg/arm action.
But as I think has been agreed, the optimal course of action does not depend merely on probability assignments. It also depends on value assignments. So, depending on your value assignments, the optimal course of action may switch from A to B at P=60%, or alternatively at P=80%, etc. In the case of crossing the street, I argued that the optimal course of action switches at P>99.9%.
But binary belief (i.e. belief versus non-belief), I think, is immediately translatable into decision and action. That, I think, is the function of binary belief. But in that case, since optimal action switches at different P depending on value assignments, then belief must also switch between belief and failure to believe at different P depending on value assignments.
Here’s a concise answer that straightforwardly applies the rule I already stated. Since my rule only applies above 50% and since P(being shot)=10% (as I recall), then we must consider the negation. Suppose P(I will be shot) is 10% and P(I will be stabbed) is 10% and suppose that (for some reason) “I will be shot” and “I will be stabbed” are mutually exclusive. Since P<50% for each of these we turn it around, and get:
P(I will not be shot) is 90% and P(I will not be stabbed) is 90%. Because the cost of being shot, and the cost of being stabbed, are so very high, the threshold for being convinced must be very high as well; set it to 99.9%. Since P=90% for each of these, it does not reach my threshold for being convinced.
Therefore I am not convinced that I will not be shot and I am not convinced that I will not be stabbed. Therefore I will not go without my bulletproof body armor and I will not go without my stab-proof body armor.
So the rule seems to work. The fact that these are mutually exclusive dangers doesn’t seem to affect the outcome. [Added: For what I consider to be a more useful discussion of the topic, see my other answer.]
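The vest-and-armor reasoning can be written out mechanically (the numbers and the 99.9% threshold are the ones chosen above):

```python
# Sketch of the negation rule above: with a high-stakes threshold of 99.9%,
# neither "I will not be shot" nor "I will not be stabbed" clears the bar,
# so neither piece of armor is left at home.
threshold = 0.999
p_not_shot = 1 - 0.10        # 0.90
p_not_stabbed = 1 - 0.10     # 0.90

wear_bulletproof_vest = p_not_shot < threshold    # not convinced of safety
wear_stabproof_armor = p_not_stabbed < threshold
print(wear_bulletproof_vest, wear_stabproof_armor)   # True True
```

The mutual exclusivity of the two dangers never enters the calculation, which matches the observation above that it doesn’t affect the outcome.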
[Added: see my other answer for a concise answer, which however leaves out a lot that I think is important to discuss.]
For starters, I think there is no problem understanding these two precautions against mutually exclusive dangers in terms of probability assignments, what I consider the more fundamental level of how we think. In fact, I consider this fact—that we do prepare for mutually exclusive dangers—as evidence that our fundamental way of thinking really is better described in terms of probability assignments than in terms of binary beliefs.
Talk about binary beliefs is folk psychology. As Wikipedia says:
Folk psychology embraces everyday concepts like “beliefs”, “desires”, “fear”, and “hope”.
People who think about mind and brain sometimes express misgivings about folk psychology, sometimes going so far as to suggest that things like beliefs and desires no more exist than witches do. I’m actually taking folk psychology somewhat seriously in granting that, in addition to a fundamental, Bayesian level of cognition, there is also a more superficial, folk psychological level, and that (binary) beliefs exist in a way that witches do not. I’ve actually gone and described a role that binary, folk psychological beliefs can play in the mental economy, as a mediator between Bayesian probability assignment and binary action.
But a problem immediately arises: in mapping probability assignments to different actions, different thresholds apply for different actions. When that happens, the function of declaring a (binary) belief (publicly or silently to oneself) breaks down, because the threshold for declaring belief appropriate to one action is inappropriate to another. I attempted to illustrate this breakdown with the two dialogs between Bob and Max. Bob revises his threshold up mid-conversation when he discovers that the actions he is called upon to perform in light of his stated beliefs are riskier than he had anticipated.
I think that in certain break-down situations, it can become problematic to assign binary, folk-psychological beliefs at all, and so we should fall back on Bayesian probability assignments to describe what the brain is doing. The idea of the Bayesian brain might also of course break down, it’s also just an approximation, but I think it’s a closer approximation. So in those break-down situations, my inclination is to refrain from asserting that a person believes, or fails to believe, something. My preference is to try to understand his behavior in terms of a probability that he has assigned to a possibility, rather than in terms of believing or failing to believe.
Sadly, I think that there is a strong tendency to insist that there is one unique true answer to a question that we have been answering all our lives. For example, for a small child who has not yet learned that the planet is a sphere, “up” is a single direction, one that doesn’t depend on where the child is. If you sent that child into space, he might immediately wonder, “which way is up?” In fact, even many adults may, in their gut, wonder “which way is up?”, because deep in their gut they believe that there must be an answer, even though intellectually they understand that “up” does not always make sense. The gut feeling that there is a universal “up” that applies to everything arises when someone takes a globe or map of the Earth and turns it upside down. It just looks upside down, even though we understand intellectually that “up” and “down” don’t truly apply here. The same feeling shows up in science fiction space battles, where all the ships are oriented in relation to a universal “up”.
Similarly, I think there is a strong tendency to insist that there is one unique and true answer to the question, “what do I believe?”. And so we answer the question and hold on tightly to the answer. Because of this, I think that introspection about “what I believe” is suspect.
As I said, I have not entirely figured out the implicit rules that underlie what we (declare to ourselves silently that we) believe. I’ve acknowledged that for P<50%, we seem to withhold (declaration of) belief regardless of what our value assignments are. That being the case, I’m not entirely sure how to answer questions about belief in the case of precautions against dangers with P<50%.
I find it extremely interesting, however, that Pascal actually seems to have bit the bullet and advocated (declaration of) belief even when P<<50%, for sufficiently extreme value assignments.
So in those break-down situations, my inclination is to refrain from asserting that a person believes, or fails to believe, something. My preference is to try to understand his behavior in terms of a probability that he has assigned to a possibility, rather than in terms of believing or failing to believe.
I think this is the most common position held on this board—that’s why I found your model confusing.
It seems the edge cases that make it break are very common (for example, taking precautions against a flip of heads and a flip of tails). Moreover, I think the reason it doesn’t work on probabilities below 50% is the same as the reason it doesn’t work on probabilities >= 50%. What lesson do you intend to impart by it?
As an aside, my understanding of Pascal’s wager is that it is an exhortation to seek out the best possible evidence, rather than to “believe something because it would be beneficial if you did” (which doesn’t really make a lot of sense).
That’s a very interesting notion of what “convinced” means. It seems far from what most people would say (I don’t think that term when generally used takes the pay-off into account). I would however suggest that a delusion about a major branch of academia could potentially have serious results unless the belief is very carefully compartmentalized from impacting other beliefs.
I’m curious, given this situation, what evidence would you consider sufficient to convince you that Andrew is right? What evidence would convince you that Andrew is wrong?
I would however suggest that a delusion about a major branch of academia could potentially have serious results unless the belief is very carefully compartmentalized from impacting other beliefs.
That is essentially what I was getting at in paragraph 4.
This supports my position. While delusion is low-cost for most people (as I explain in paragraph 3), it is not low-cost for everyone (as I explain in paragraph 4). When delusion is high-cost, then a good strategy is to avoid commitment, to admit ignorance, when the assigned probability is below a high threshold. Paragraph 5 says that this is usually true of facts critical to the success of everyday actions. For example, crossing the street: it is a good idea to look carefully both ways before crossing a street. It’s not enough to be 90% sure that there are no cars coming close enough to run over you. That is insufficiently high, because you’ll be run over within days if you cross the street with such a low level of certainty. You need to be well north of 99.9% certain that there are no cars coming before you act on the assumption that there are no cars (i.e. by crossing the street). That’s the only way you can cross the street day after day for eighty years without coming to harm.
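As a rough sanity check on the street-crossing figure, here is a sketch under the (hypothetical) assumption of two crossings a day for eighty years; even 99.9% per-crossing confidence would not get you through a lifetime unscathed:

```python
# Rough sketch: probability of never being hit over a lifetime of crossings,
# assuming (hypothetically) two crossings a day for eighty years, and that
# crossing while "convinced" with probability p means a 1-p chance of a
# mistake on each crossing.
crossings = 2 * 365 * 80              # 58,400 crossings

def p_no_accident(p_per_crossing):
    return p_per_crossing ** crossings

print(p_no_accident(0.90))       # ~0: run over almost immediately
print(p_no_accident(0.999))      # still effectively zero over a lifetime
print(p_no_accident(0.9999999))  # ~0.994: roughly the level actually needed
```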
It seems far from what most people would say (I don’t think that term when generally used takes the pay-off into account).
People don’t consciously consider it, but the brain is a machine that furthers the interest of the animal, and so the brain can I think be relied upon to take costs and benefits into account in decisions, and therefore in beliefs. For example, what does it take for a person to be convinced that there are no cars coming? If people were willing to cross the street with less than 99.9% probability that there are no cars coming, we would be seeing vastly more accidents than we do. It seems clear then to me that people don’t act as if they’re convinced unless the probability is extremely high. We can tell from the infrequency of accidents, that people aren’t satisfied that there are no cars coming unless they’ve assigned an extremely high probability to it. This must be the case whatever they admit consciously.
In the meantime this does not extend to other matters. People are easily convinced of claims about society, the economy, the government, and celebrities, where the assigned probability must be well below 99.9%.
I’m curious, given this situation, what evidence would you consider sufficient to convince you that Andrew is right? What evidence would convince you that Andrew is wrong?
That’s a very difficult question to answer. I think it’s hard to know ahead of time, hard to model the hypothetical situation before it happens. But I can try to reason from analogous claims. Humans are complex, and so is their biology. So, let’s ask how much evidence it takes to convince the FDA that a drug works, that it does more good than harm. As you know, it’s quite expensive to conduct a study that would be convincing to the FDA. Now, it could be that the FDA is far too careful. So let’s suppose that the FDA is far too careful by a factor of 100. So, whatever it typically costs to prove to the FDA that a drug works, divide that by 100 to get a rough estimate of what it should take to establish whether what Andrew says is true (or false).
Estimates about the cost of developing a new drug vary widely, from a low of $800 million to nearly $2 billion per drug.
And since we’re talking clinical trials, we’re talking a p-value threshold of 0.05. That means that, if the drug doesn’t work at all, there’s a 1 in 20 chance that the trial will spuriously demonstrate that it works. While it depends on the particular case, my guess is that a Bayesian watching the experiment will not assign all that high a probability to the value of the drug. Add to this that many drugs that work on average don’t work at all on an alarming fraction of patients, so the fact that a drug works is a statistical fact, not a fact about each application. So we’re not getting a high probability about the success of an individual application from these expensive trials.
Dividing by 100, that’s $8 million to $20 million.
Okay, let’s divide by 100 again. That’s $80 thousand to $200 thousand.
So, now I’ve divided by ten thousand, and the cost of establishing the truth to a sufficiently high standard comes to around a hundred thousand dollars—about a year’s pay for a bright, well-educated, hard-working individual.
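For what it’s worth, the chain of divisions above checks out:

```python
# Checking the back-of-the-envelope arithmetic above: the reported
# drug-development cost range, divided by 100 twice.
low, high = 800e6, 2e9                 # $800 million to $2 billion per drug

low, high = low / 100, high / 100      # suppose the FDA is 100x too careful
print(low, high)                       # $8 million to $20 million

low, high = low / 100, high / 100      # divide by 100 once more
print(low, high)                       # $80 thousand to $200 thousand
```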
That doesn’t seem that unreasonable to me, because the notion of a person taking a year out of his life to check something seems not at all unusual. But what about crossing the street? It doesn’t cost a hundred thousand dollars to tell whether there are cars coming. Indeed not—but it’s a concrete fact about a specific time and place, something we can easily and inexpensively check. There are different kinds of facts, some harder than others to check. So the question is, what kind of fact is Andrew’s claim? My sense of it is that it belongs to the category of difficult-to-check.
But it might not. That really depends on what method a person comes up with to check the claim. Emily Rosa’s experiment on therapeutic touch is praised because it was so inexpensive and yet so conclusive. So maybe there is an inexpensive and conclusive demonstration either pro or con Andrew’s claim.
This supports my position. While delusion is low-cost for most people (as I explain in paragraph 3), it is not low-cost for everyone (as I explain in paragraph 4). When delusion is high-cost, then a good strategy is to avoid commitment, to admit ignorance, when the assigned probability is below a high threshold. Paragraph 5 says that this is usually true of facts critical to the success of everyday actions. For example, crossing the street: it is a good idea to look carefully both ways before crossing a street. It’s not enough to be 90% sure that there are no cars coming close enough to run over you. That is insufficiently high, because you’ll be run over within days if you cross the street with such a low level of certainty.
Ah, I think I see the problem. It seems that you are acting under the assumption that a conscious declaration of being “convinced” should cause you to act as if the claim in question has probability 1. Thus, one shouldn’t say one is “convinced” unless one has a lot of evidence. May I suggest that you are possibly confusing cognitive biases with epistemology?
So the question is, what kind of fact is Andrew’s claim? My sense of it is that it belongs to the category of difficult-to-check.
Possibly. But asking oneself what evidence would drastically change one’s confidence in a hypothesis one way or another is a very useful exercise. I would hesitantly suggest that, for most questions, if one can’t easily conceive of what such evidence would look like, then one probably hasn’t thought much about the matter.
So, say math had some terribly strong political bias, what would we expect? Do we see that? Do we not see it? How would we go about testing this assuming we had a lot of resources allocated to testing just this?
Ah, I think I see the problem. It seems that you are acting under the assumption that a conscious declaration of being “convinced” should cause you to act as if the claim in question has probability 1. Thus, one shouldn’t say one is “convinced” unless one has a lot of evidence. May I suggest that you are possibly confusing cognitive biases with epistemology?
Not at all. In fact I pointed out that my account of being “convinced” is continuous with Pascal’s Wager, and Pascal argued in favor of believing on the basis of close to zero probability. As the Stanford Encyclopedia introduces the wager:
“Pascal’s Wager” is the name given to an argument due to Blaise Pascal for believing, or for at least taking steps to believe, in God.
Everyone is familiar with it of course. I only quote the Stanford to point out that it was in fact about “believing”. And of course nobody gets into heaven without believing. So Pascal wasn’t talking about merely making a bet without an accompanying belief. He was talking about, must have been talking about, belief, must have been saying you should believe in God even though there is no evidence of God.
I would hesitantly suggest that, for most questions, if one can’t easily conceive of what such evidence would look like, then one probably hasn’t thought much about the matter.
The issue is two-fold: whether mathematicians are less interested in elementary proofs than before, and if they are, why. So, how would you go about checking to see whether mathematicians are less interested in elementary proofs? What if they do fewer elementary proofs? But it might be because there aren’t elementary proofs to do. So you would need to deal with that possibility. How would you do that? Would you survey mathematicians? But the survey would give little confidence to someone who suspects mathematicians of being less interested.
As part of the reason “why”, one possible answer is, “because elementary proofs aren’t that important, really.” I mean, it might be the right thing. How would I know whether it was the right thing? I’m not sure. I’m not sure that it’s not a matter of preference. Well, maybe elementary proofs have a better track record of not ultimately being overturned. How would we check that? Sounds hard.
So, say math had some terribly strong political bias, what would we expect? Do we see that? Do we not see it?
Well, as I recall, his actual claim was that liberalism causes mathematicians to evade accountability, and part of that evasion is abandoning the search for elementary proofs. So one question to ask is whether liberalism causes a person to evade accountability. There is a lot about liberalism that can arguably be connected to evasion of personal accountability. The specific question is whether liberalism would cause mathematicians to evade mathematical accountability—that is, accountability in accordance with traditional standards of mathematics. If so, this would be part of a more general tendency of liberal academics, liberal thinkers, to seek to avoid personal accountability.
In order to answer this I really think we need to come up with an account of what, exactly, liberalism is. A lot of people have put a lot of work into coming up with an account of what liberalism is, and each person comes up with a different account. For example there is Thomas Sowell’s account of liberals in his Conflict of Visions.
What, exactly, liberalism is, would greatly affect the answer to the question of whether liberalism accounts for the avoidance (if it exists) of personal accountability.
I will go ahead and give you just one, highly speculative, account of liberalism and its effect on academia. Here goes. Liberalism is the ideology of a certain class of people, and the ideology grows in part out of the class. We can think of it as a religion, which is somewhat adapted to the people it occurs in, just as Islam is (presumably) somewhat adapted to the Middle East, and so on. Among other things, liberalism extols bureaucracy, such as by preferring regulation of the marketplace, which is rule by bureaucrats over the economy. This is in part connected to the fact that liberalism is the ideology of bureaucrats. However, internally, bureaucracy grows in accordance with a logic that is connected to the evasion of personal responsibility by bureaucrats. If somebody does something foolish and gets smacked for it, the bureaucratic response is to establish strict rules to which all must adhere. Now the next time something foolish is done, the person can say, “I’m following the rules”, which he is. It is the rules which are foolish. But the rules aren’t any person. They can’t be smacked. Voila—evasion of personal responsibility. This is just one tiny example.
So, to recap, liberalism is the ideology of bureaucracy, and extols bureaucracy, and bureaucracy is in no small part built around the ideal of the avoidance of personal responsibility. One is, of course, still accountable in some way—but the nature of the accountability is radically different. One is now accountable for following the intricate rules of the bureaucracy to the letter. One is not personally accountable for the real-world disasters that are produced by bureaucracy which has gone on too long.
The liberal mindset, then, is the bureaucratic mindset, and the bureaucratic mindset revolves around the evasion of personal accountability, at least has a strong element of evasion.
Now we get to the universities. The public universities are already part of the state. The professors work for the state. They are bureaucratized. What about private universities? They are also largely connected with the state, especially insofar as professors get grants from the state. Long story short, academic science has turned into a vast bureaucracy, scientists have turned into bureaucrats. Scientific method has been replaced by such things as “peer review”, which is a highly bureaucratized review by anonymous (and therefore unaccountable) peers. Except that the peers are accountable—though not to the truth. They are accountable to each other and to the writers they are reviewing, much as individual departments within a vast bureaucracy are filled with people who are accountable—to each other. What we get is massive amounts of groupthink, echo chamber, nobody wanting to rock the boat, same as we get in bureaucracy.
So now we get to mathematicians.
Within a bureaucracy, your position is safe and your work is easy. There are rules, probably intricate rules, but as long as you follow the rules, and as long as you’re a team player, you can survive. You don’t actually have to produce anything valuable. The rules are originally intended to guide the production of valuable goods, but in the end, just as industries capture their regulatory authority, so do bureaucrats capture the rules they work under. So they push a lot of paper but accomplish nothing.
I mean, here’s a prediction from this theory: we should see a lot of trivial papers published, papers that don’t really advance the field in any significant way but merely add to the count of papers published.
And in fact this is what we see. So the theory is confirmed! Not so fast—I already knew about the academic paper situation, so maybe I concocted a theory that was consistent with this.
It seems that Pascal’s Wager is a particularly difficult example to work with since it involves a hypothesized entity that actively rewards one for giving a higher probability assignment to that hypothesis.
I’m not sure what a good definition of “liberalism” is, but the definition you use seems to mean something closer to bureaucratic authoritarianism, which obviously isn’t the same, given that most self-identified liberals want less government involvement in many family-related issues (e.g., gay marriage). It is likely that there is no concise definition of these sorts of terms, since which policy attitudes are common is to a large extent a product of history and social forces rather than coherent ideology.
I mean, here’s a prediction from this theory: we should see a lot of trivial papers published, papers that don’t really advance the field in any significant way but merely add to the count of papers published.
Well, it’s nice of you to admit that you already knew this. But, at the same time, this seems to be a terribly weak prediction even if one didn’t know about it. One expects that as fields advance and there is less low-hanging fruit, more and more seemingly minor papers will be published. (I’m not sure there are many papers published which are trivial; minor and trivial are not the same thing.)
given that most self-identified liberals want less government involvement in many family-related issues (e.g., gay marriage).
Mm. I’m not quite sure this is true. Many liberals I know are perfectly content with the level of government involvement in (for example) marriage—we just want the nature of that involvement to not discriminate against (for example) gays.
It seems that Pascal’s Wager is a particularly difficult example to work with since it involves a hypothesized entity that actively rewards one for giving a higher probability assignment to that hypothesis.
Almost all hypotheses have this property. If you’re really in event X, then you’d be better off believing that you’re in X.
I think what Joshua meant was that the situation rewards the belief directly rather than the actions taken as a result of the belief, as is more typical.
Yes, but there was no explanation of why it’s “particularly difficult”, and the only property listed as justifying this characterization is almost universally present, including in the cases that are not at all difficult. I pointed out how this property doesn’t work as an explanation.
I think the phrase “entity that actively rewards one for giving a higher probability...” made the point clear enough. If my state of information implies a 1% probability that a large asteroid will strike Earth in the next fifty years, then I would be best off assigning 1% probability to that, because the asteroid’s behaviour isn’t hypothesized to depend at all on my beliefs about it. If my state of information implies a 1% probability that there is a God who will massively reward only those who believe in his existence with 100% certainty, and who will punish all others, then that’s an entity that’s actively rewarding certain people based on having overconfident probability assignments; so the difficulty is in the possibility and desirability of treating one’s own probability assignments as just another thing to make decisions about.
I understand where the difficulty comes from, my complaint was with justification of the presence of the difficulty given in Joshua’s comment. Maybe you’re right, and the onus of justification was on the word “actively”, even though it wasn’t explained.
Let belief A include “having at least .9 belief in A has a great outcome, independent of actions”, where the great outcome in question is worth a dominating amount of utility. If an agent somehow gets into the epistemic state of having .5 belief in A (and has no opposing beliefs about direct punishments for believing A), and updating its beliefs without evidence is an available action, it will update to have .9 belief in A. If it encounters evidence against A that wouldn’t reduce the probability low enough to counter the dominating utility of the great outcome, it will ignore it. And if it does not keep a record of the evidence it has processed, just updating incrementally, it will not notice when it has accumulated enough evidence to discard A.
Of course, this illustration of the problem depends on the agent having certain heuristics and biases.
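A toy version of that agent can be sketched like this (the names, the candidate grid, and the mild quadratic calibration penalty are my own illustrative assumptions):

```python
GREAT_OUTCOME = 1_000_000  # dominating utility for holding credence >= 0.9

def chosen_credence(evidence_prob):
    # The agent may set its own credence c in A as an action; it is
    # rewarded (if A is true) for any c >= 0.9, and pays only a mild
    # penalty for miscalibration relative to what its evidence supports.
    def expected_utility(c):
        reward = GREAT_OUTCOME if c >= 0.9 else 0.0
        return evidence_prob * reward - (c - evidence_prob) ** 2
    candidates = [i / 100 for i in range(101)]
    return max(candidates, key=expected_utility)

print(chosen_credence(0.5))  # 0.9: it inflates its credence without evidence
print(chosen_credence(0.1))  # 0.9: contrary evidence gets ignored
```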
This is a good start, but on Conservapedia “liberal” and “liberalism” are pretty much local jargon and their meanings have departed the normative usages in the real world. It is not overstating the case to say that Schlafly uses “liberal” to mean pretty much anything he doesn’t like.
When dealing with the possibility of ideology influencing results one needs to be careful that one isn’t engaging in projection based on one’s own ideology influencing results. Otherwise this can turn into a fully general counter-argument.
That is true. The easy case is when clear ideological rifts can be seen even in the disputes among credentialed experts, as in economics. The much more difficult case is when there is a mainstream consensus that looks suspiciously ideological.
To use one of the possibly more amusing examples, look at Conservapedia’s labeling of the complex numbers and the axiom of choice as products of liberal ideology.
This sounds like it’s probably a hoax by hostile editors. It reminds me of the famous joke from Sokal’s hoax paper in which he described the feminist implications of the axioms of equality and choice. Come to think of it, it might even be inspired directly by Sokal’s joke.
No, the comments have been made by the project’s founder Andrew Schlafly. He’s also claimed that the Fields Medal has a liberal bias (disclaimer: that’s a link to my own blog.) Andrew also has a page labeled Counterexamples to Relativity written almost exclusively by him that claims among other things that “The theory of relativity is a mathematical system that allows no exceptions. It is heavily promoted by liberals who like its encouragement of relativism and its tendency to mislead people in how they view the world.”
I will add to help prevent mind-killing that Conservapedia is not taken seriously by much of the American right-wing, and that this sort of extreme behavior is not limited to any specific end of the political spectrum.
1) It is plausible that an element of affirmative action could have crept into the awarding of the Fields Medal. It is not unreasonable to suspect that it has. Any number of biases might creep in to the awarding of a prize, however major it is. For example, it could well be that a disproportionate number of Norwegians or Swedes have won the Nobel relative to their accomplishments, because of location.
2) That the mathematics of relativity (either special or general) “allows no exceptions” is trivial but as far as I can see true, because it is true of any mathematical system that exceptions to the system are, pretty much by definition, not included inside the system. Anything inside the system itself is not an exception to it. So, trivial. But not false. What we really need to do is see why the point is brought up.
Looking further into the matter of “exceptions”, to see why he brought up the true but trivial point with respect to relativity, in the main article I found this:
The mathematics of relativity assume no exceptions, yet in the time period immediately following the origin of the universe the relativity equations could not possibly have been valid.
He appears to be saying that relativity breaks down at the Big Bang. He doesn’t appear to provide any ground for making this claim, but it seems likely. Wikipedia says something similar in its article on black holes:
Theoretically, this boundary is expected to lie around the Planck mass..., where quantum effects are expected to make the theory of general relativity break down completely.
The big bang is a singularity, and in that respect is similar to black holes, so if general relativity breaks down completely in a black hole then I would imagine it would also be likely to break down completely at the Big Bang.
3) That people have often speciously used Einstein’s relativity as a metaphor to promote all sorts of relativism is well known. People have similarly speciously used QM to promote all sorts of nonsense. So that particular point is hardly controversial, I think.
I have never relied on Conservapedia and don’t intend to start whereas I use Wikipedia several times a day, but these particular attacks on the Conservapedia seem weak.
I’m not particularly inclined towards a charitable interpretation of arguments written by Andrew Schlafly. In my own short time frequenting the site, I found him rendering judgments on others’ work based on the premise that
“No facts conflict with conservative ideology
therefore, anything which conflicts with conservative ideology is not a fact.”
If you try to interpret his views in the most reasonable light you can, you probably haven’t understood him. He’s a living embodiment of Poe’s Law.
Did you read the page in question or the entire quote I gave? The first sentence isn’t a big problem (although I think you aren’t parsing correctly what he’s trying to say). The second sentence I quoted was “It is heavily promoted by liberals who like its encouragement of relativism and its tendency to mislead people in how they view the world.”
And yes, a small handful of his 33 “counterexamples” fall into genuine issues that we don’t understand, and a handful (such as #33) are standard physics puzzles. Then you have things like #9, which claims that a problem with relativity is “The action-at-a-distance by Jesus, described in John 4:46-54.” (I suppose you could argue that this is a good thing since he’s trying to make his beliefs pay rent.) And some of them are just deeply confusing, such as #14, which claims that the changing mass of the standard kilogram is somehow a problem for relativity. I don’t know what exactly he’s getting at there.
But the overarching point I was trying to make is somewhat beside the point: the problem I was illustrating was the danger in turning claims that others are being ideological into fully general counterarguments. Given the labeling of relativity as being promoted by “liberals” and the apparent conflation with moral relativism, this seems to be a fine example.
Incidentally, note that Conservapedia’s main article on relativity points out actual examples where some on the left have actually tried to make very poor analogies between general relativity and their politics, but they don’t seem to appreciate that just because someone claims that “Theory A supports my political belief B” doesn’t mean the proper response is to attack Theory A. This article also includes the interesting line “Despite censorship of dissent about relativity, evidence contrary to the theory is discussed outside of liberal universities.” This is consistent with the project’s apparent general approach, as with much in American politics, to make absolutely everything part of the great mindkilling.
I can see that he attacks relativity, devotes a disproportionate amount of space to attacks, and relatively little to an explanation, though comparing it to his article on quantum mechanics it’s not that small—his article on QM is the equivalent of a Wikipedia stub. But it’s not obvious to me that the liberalism of some of its supporters is the actual reason for the problems he has with it.
But it’s not obvious to me that the liberalism of some of its supporters is the actual reason for the problems he has with it.
It is in general difficult to tell what the “actual” motivations are for an individual’s beliefs. Often they are complicated. Regarding math and physics there’s a general pattern that Andrew doesn’t like things that are counterintuitive. I suspect that the dislike of special and general relativity comes in part from that.
It is plausible that an element of affirmative action could have crept into the awarding of the Fields Medal. It is not unreasonable to suspect that it has. Any number of biases might creep in to the awarding of a prize, however major it is. For example, it could well be that a disproportionate number of Norwegians or Swedes have won the Nobel relative to their accomplishments, because of location.
Sure. In the case of the Nobel prizes this claim has been made before. In particular, the claim is frequently made that the Nobel Prize in Literature has favored northern Europeans and has had serious political overtones. There’s a strong argument that the committee has generally been unwilling to award the prize to people with extreme right-wing politics while being fine with awarding it to those on the extreme left. Moreover, you have cases like Eyvind Johnson, who got the prize despite being on the committee itself and not being well known outside Sweden. (I’m not sure if any of his major works had even been translated into English or French when he got the prize.) And every few years there’s a minor row when someone on the lit committee decides to bash US literature in general, connecting it to broad criticism of the US and its culture (see for example this).
There’s also no question that politics has played heavy roles in the awarding of the Peace Prize.
And in the sciences there have been serious allegations of sexism in the awarding of the prizes. The best source for this, as far as I’m aware, is “The Madame Curie Complex” by Julie Des Jardins (unfortunately it isn’t terribly well written, at times exaggerates the accomplishments of some individuals, sees patterns where they may not exist, and suffers from other problems).
But, saying “it isn’t unreasonable to suspect X” is different from asserting X without any evidence.
But, saying “it isn’t unreasonable to suspect X” is different from asserting X without any evidence.
True, but this appears to be from a more free-wheeling, conservative-pundit blog-like section of the ’pedia, rather than from its articles. I think that once it’s understood that this section is a highly opinionated blog, the particular assertion seems to fit comfortably. For instance, right now, one of the entries reads:
Socialist England runs the 2012 Olympics, and an early warning about possible cost overruns and/or missed construction deadlines already appears
The “Socialist England” item is from the news section; “Socialist England” does not have an article of its own on Conservapedia. The item links to a Reuters article, which is also nowhere near as dire as the Conservapedia headline makes it out to be.
The relativity article, and the other main articles linked on the main page, are clearly standard articles and not intended to be viewed as simple opinion blogs. It has no attribution, and lists eighteen references in the exact same manner as a Wikipedia article.
At best it is misguided; at worst it is intended to misinform people about the theory.
At the end of the article counterexamples to evolution, an old earth, and the Bible are linked to, with exactly the same format (and worse mischaracterizations than the Relativity article).
Random articles of more innocuous subjects (like book) have exactly the same format.
Again, it’s clearly the meat of the website, as the more mundane articles do little more than go out of their way to add a mention of the Bible or Jesus in some way.
It is important to note here that Andrew Schlafly, founder of Conservapedia and author of most of these articles, has a degree in electrical engineering and worked as an engineer for several years before becoming a lawyer. He would not only be capable of understanding the mathematics, he would have used concepts from the theory in his professional work. At least most engineer cranks aren’t this bad.
It is important to note here that Andrew Schlafly, founder of Conservapedia and author of most of these articles, has a degree in electrical engineering and worked as an engineer for several years before becoming a lawyer. He would not only be capable of understanding the mathematics, he would have used concepts from the theory in his professional work.
In fairness to relativity crackpots, unless things have changed since my freshman days, the way special relativity is commonly taught in introductory physics courses is practically an invitation for the students to form crackpot ideas. Instead of immediately explaining the idea of the Minkowski spacetime, which reduces the whole theory almost trivially to some basic analytic geometry and calculus and makes all those so-called “paradoxes” disappear easily in a flash of insight, physics courses often take the godawful approach of grafting a mishmash of weird “effects” (like “length contraction” and “time dilatation”) onto a Newtonian intuition and then discussing the resulting “paradoxes” one by one. This approach is clearly great for pop-science writers trying to dazzle and amaze their lay audiences, but I’m at a loss to understand why it’s foisted onto students who are supposed to learn real physics.
As far as I can tell a lot of it is a hoax, though the founder may have a hard time telling which editors are creative trolls and which editors (if any) are serious.
It is periodically asserted by people claiming to be former contributors to Conservapedia that the founder simply endorses contributors who overtly support him and rejects those who overtly challenge him.
If that were true, I’d expect that editors who are willing to craft contributions that overtly support the main themes of the site get endorsed, even if their articles are absurd to the point of self-parody.
I haven’t made a study of CP, but that sounds awfully plausible to me.
You will be unsurprised to hear that CP has played out in precisely that manner: a parodist coming in, dancing on the edges of Poe and wreaking havoc by feeding Schlafly’s biases.
When dealing with the possibility of ideology influencing results one needs to be careful that one isn’t engaging in projection based on one’s own ideology influencing results. Otherwise this can turn into a fully general counter-argument. (To use one of the possibly more amusing examples, look at Conservapedia’s labeling of the complex numbers and the axiom of choice as products of liberal ideology.)
Also, an incidental note about the issue of climate change: we should expect that most aspects of climate change will be bad. Humans have developed an extremely sensitive system over the last few hundred years. We’ve settled far more territory (especially on the coasts) and have far more complicated interacting agriculture. Changing the environment in any way is a change from the status quo. Changing the status quo in any large way will be economically disruptive. Note however that there are a handful of positives to an increase in average global temperature that are clearly acknowledged in the literature. Two examples are the creation of a north-west passage, and the opening of cold areas of Russia to more productive agriculture (or in some cases, any agriculture as the permafrost melts).
Looked for it, didn’t find it. Links: Axiom of Choice. Complex Number.
http://rationalwiki.org/wiki/Conservapedia:Conservapedian_mathematics
If you are foolish enough to want to comprehend the strangeness of Conservapedia, RationalWiki is the place to go.
It looks like my memory was slightly off. The main focus is apparently on the project founder’s belief that “liberals” don’t like elementary proofs. See this discussion. I’m a bit busy right now but I’ll see if I can dig up his comments about the Axiom of Choice.
I checked that page. I don’t see any statement that “liberals” don’t like elementary proofs.
In this discussion, Andy Schlafly, to whom you are apparently referring since he appears to have control over content, is arguing with Mark Gall over the best definition of “elementary proof”. Essentially Mark believes that the definition should reflect what he believes to be common usage, and Andy believes that the definition should reflect a combination of usage and logic, ruling out certain usage as mis-usage. I think Andy is essentially identifying what he believes to be a natural kind, and believes his definition to cut nature at the joints.
Andy uses the word “liberal” in only one place, here:
“Liberal politics” here is given only as an example of error, one example among several, another example being atheism. The statement is not that liberals don’t like elementary proofs any more than that atheists don’t like elementary proofs. In fact I found no statement that anybody doesn’t like elementary proofs. Rather, the discussion appears to be about the best definition of elementary proofs, not about liking or disliking.
Also, the “talk” pages of Conservapedia, like the “talk” pages of Wikipedia, are not part of the encyclopedia proper. I think it’s incorrect, then, to say that the Conservapedia does something, when in fact it is done in the talk pages.
Ok. If you prefer, Andrew is even more blunt about his meaning here
where he says:
(End quote from Andrew).
That example seems to be pretty explicit. I agree that in general what happens on a talk page is not the same thing as what happens in the encyclopedia proper, but Andrew includes this claim as one of his examples of bias in Wikipedia, which is in Conservapedia’s main space (although that page doesn’t explicitly call it an example of “liberal” bias).
Okay, that’s close to what you were saying, though this seems to be a speculative hypothesis he came up with to explain the striking fact that Wikipedia did not include the entry. The important topic is the omission from Wikipedia. The explanation is his attempt to understand why it happened. Many people are apt to come up with highly speculative explanations when trying to explain surprising events. I don’t think all that much should be made of such things. In any case, I’m not convinced that he’s wrong. (I’m not convinced that he’s right either.)
It isn’t that surprising that we’d have that sort of thing missing. A lot of the articles I’ve written for Wikipedia are ones I only wrote because I was trying to look them up and was surprised that we didn’t have them. People don’t appreciate how many gaps Wikipedia still has. For example, until I wrote it, there was no Wikipedia article for Samuel Molyneux, who was a major historical astronomer.
Beware false compromise. The truth does not always lie in the middle. (Incidentally, are you a Bayesian? If so, around what probability do you define as being “convinced”?)
To my mind, being convinced of a claim is essentially being ready to take some action which assumes the claim is true. I think that’s the relevant threshold, I think that’s essentially how the term is used in ordinary speech. Anyway, that’s how I think I should use it.
That being the case, then whether to be convinced or not depends on costs and benefits, downsides and upsides. For example, if the upside is $1 and the downside is $100, then I will not be convinced enough to take a risky action unless I assign its success (and, therefore, the truth of statements on which its success depends) a probability greater than about 99%. But if the upside and downside are both $1 then I will readily take action even if I assign the probability slightly over 50%. (By this logic, Pascal can be convinced of God’s existence even if the probability he assigns to it is much less than 50% - which admittedly seems to represent a breakdown in my understanding of “convinced”, but I still think it works above 50%)
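The arithmetic here can be made explicit: you act when the expected gain exceeds the expected loss, i.e. when p × upside > (1 − p) × downside. A minimal sketch (the function name is mine):

```python
def conviction_threshold(upside, downside):
    """Probability above which acting on the belief has positive
    expected value, i.e. p * upside > (1 - p) * downside."""
    return downside / (upside + downside)

# Upside $1, downside $100: you need p > ~99% before acting.
print(conviction_threshold(1, 100))  # ~0.9901
# Upside and downside both $1: any p over 50% suffices.
print(conviction_threshold(1, 1))    # 0.5
```

This reproduces both figures from the paragraph: with a $1 upside against a $100 downside the threshold is 100/101 ≈ 99%, and with equal stakes it drops to exactly 50%.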
In the current case there are essentially no practical consequences from being right or wrong. What I find, though, is that when you take away practical consequences, most people interpret this as a license to have a great deal of confidence in all sorts of conflicting (and therefore at least half wrong) beliefs. This makes sense rationally, if we assume that the costs of having false beliefs are low and the benefits of having true beliefs are high, and in fact there’s even a stronger case for being carelessly overconfident, which is that even false beliefs, confidently asserted, can be beneficial. The benefit in question is largely a social benefit—tribal affiliation, for example.
So then, one might think, I should have little problem becoming convinced by the first claim about academic mathematicians that comes along, seeing as there is so little downside from indulging in delusion. But this does not mean that there is no downside. I think that a certain amount of harm is done to a person who has false beliefs, and whether that harm outweighs the benefit depends on what that person is doing with himself.
In any case I think that when it comes to beliefs that have important practical consequences, the harm of delusion is typically much greater than not knowing—provided one realizes that one does not know. So in practical matters it is usually better to admit ignorance than to delusionally become convinced of a randomly selected belief. For this reason, I think that in practical matters one should usually place the threshold rather high before committing oneself to some belief. So the real, everyday world typically offers us the inverse of Pascal’s wager: the price of commitment to a false belief is high, and the price of admitting one does not know (agnosticism) is (relatively) low.
If I think that I have a 10% chance of being shot today, and I wear a bulletproof vest in response, that is not the same as being convinced that I will be shot.
Your actual belief in different things does not, so far as I can tell, depend on how useful it is to act as if those things are true. How you act in response to your beliefs does.
Edit:
Actually, wait a sec.
Just follow through on the fact that you noticed this.
You have only pointed out an incompleteness in my account that I had already acknowledged: below 50%, the account I gave of being convinced no longer seems to hold.
The perfect is the enemy of the good. That an account does not cover all cases does not mean it is not on the right track. A strong attack on the account would be to offer a better one. JoshuaZ already offered an alternative account by implication, which (as I understand it) is that belief is simply a constant cutoff: a probability assignment above some fixed threshold, say 80% (or 50%, or 90%), counts as belief.
But here’s the thing: if you believe something, aren’t you willing to act on it? We regularly explain our actions in terms of beliefs. For example, suppose you walk out of the house taking your wife’s car keys. You get to your car, notice that you can’t start the engine, and at that point discover that you are holding your wife’s car keys. Suppose she asks you, “why did you take my keys”? The answer seems obvious: “I took these keys because I believed they were my car keys.” Isn’t that obvious? Of course that’s why you took them.
To restate, you did something that would have been successful had those keys been your keys. To restate, you acted in a way that would have been successful had your belief been true.
And I think this is generally a principle by which we explain our actions, particularly our mistaken actions. The explanation is that we acted in a way that would have worked out had our beliefs been correct. And so, your actions reveal your beliefs. By taking your wife’s car keys, you reveal your belief that they are your car keys.
So your actions reveal your beliefs. But here’s the problem: your actions are a product of a combination of your probability assignments and your value assignments, the costs and benefits. That’s why you are more ready to take risky action when the downside is low and the upside is high, and less ready to take risky action when the downside is high and the upside is low. So your actions are a product of a combination of probability assignments and value assignments.
But your actions meanwhile are in accordance with your beliefs.
Conclusion follows: your beliefs are a product of a combination of probability assignments and value assignments.
Now, as I said, this picture is incomplete. But it seems to hold within certain limits.
A utility maximizing Bayesian doesn’t say “oh, this has the highest probability so I’ll act like that’s true.” A utility maximizing Bayesian says “what course of action will give me the highest expected return given the probability distribution I have for all my hypotheses?” To use an example that might help, suppose A declares that they are going to toss two standard six-sided fair dice and take the sum of the two values. If anyone guesses the correct result then A will pay the guesser $10. I assign a low probability to the result being “7” but that’s still my best guess. And one can construct other situations (if, for example, the payoff were $1000 for a correct guess that happened to be an even number, then guessing 6 or 8 makes the most sense). Does that help?
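The dice example can be worked out explicitly. A minimal sketch (the function and variable names are mine):

```python
from collections import Counter
from fractions import Fraction

# Distribution of the sum of two fair six-sided dice.
dist = Counter(a + b for a in range(1, 7) for b in range(1, 7))
probs = {s: Fraction(n, 36) for s, n in dist.items()}

def expected_return(guess, payoff):
    """Expected return of guessing `guess` when a correct guess pays `payoff`."""
    return probs[guess] * payoff

# 7 is the most likely sum (6/36), so it maximizes expected return
# even though its probability is only about 17%.
best = max(probs, key=probs.get)  # 7

# If only even guesses paid off, 6 and 8 would tie for best (5/36 each).
best_even = max((s for s in probs if s % 2 == 0), key=probs.get)
```

The point survives the computation: the utility maximizer guesses 7 despite assigning it a low absolute probability, and under the even-guesses-only payoff the optimum shifts to 6 or 8.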
That matches my own description of what the brain does. I wrote briefly:
which I explain elsewhere in more detail, and which matches your description of the utility maximizing Bayesian. It is the combination of your probability assignments and your value assignments which produces your expected return for each course of action you might take.
Depends what you mean. You are agreeing with my account, except that you say it describes a “utility maximizing Bayesian”, while I say it describes any brain (more or less). That is, I think that brains work more or less in accordance with Bayesian principles, at least in certain areas. I don’t think the brain’s calculation is tremendously precise, but I expect that it is good enough for survival.
Here’s a simple idea: everything we do is an action. To speak is to do something, therefore speech is an action. Declaration of belief is speech. So declaration of belief is an action.
Now, let us consider what CuSithBell says:
So, he agrees that how you act depends on utility. But, contrary to what he appears to believe, to declare a belief is to act—the action is linguistic. Therefore how you declare your beliefs depends on utility—that is, on the utility of making that declaration.
The utility of a declaration depends on its context, on how the declaration is used. And declarations are used. We make assertions, draw inferences, and consequently, act. So our actions depend on our statements. So our statements must be adjusted to the actions that depend on them. If someone is considering a highly risky undertaking, then we will avoid making assertions of belief unless our probability assignments are very high.
Maybe people have noticed this: people adjusting their statements, even retracting certain assertions of belief, once they discover that those statements are going to be put to a riskier use than they had thought. Maybe they have noticed it and believed it to be an inconsistency? No—it’s not an inconsistency. It’s a natural consequence of the process by which we decide where the threshold is. Here’s a bit of dialog:
Bob: There are no such thing as ghosts.
Max: Let’s stay in this haunted house overnight.
Bob: Forget it!
Max: Why not?
Bob: Ghosts!
For one purpose (which involves no personal downside), Bob declares a disbelief in ghosts. For another purpose (which involves a significant personal downside if he’s wrong), Bob revises his statement. Here’s another one:
Bob: Bullets please. My revolver is empty.
Max: How do you know?
Bob: How do you think I know?
Max: Point it at your head and pull the trigger.
Bob: No!
Max: Why not?
Bob: Why do you think?
For one purpose (getting bullets), the downside is small, so Bob has no trouble saying that he knows his revolver is empty. For the other purpose, the downside is enormous, so Bob does not say that he knows it’s empty.
I apologize for giving you the impression that I disagree with this. By ‘being convinced’, I thought you were talking about belief states rather than declarations of belief, and hence these errors arose (yes?).
I think that belief is a kind of internal declaration of belief, because it serves essentially the same function (internally) as declaration of belief serves (externally). Please allow me to explain.
There are two pictures of how the brain works which don’t match up comfortably. On one picture, the brain assigns a probability to something. On the other picture, the brain either believes, or fails to believe, something. The reason they don’t match up is that in the first picture the range of possible brain-states is continuous, ranging from P=0 to P=1, while in the second picture the range of possible brain-states is binary: one state is belief, the other is failure to believe.
So the question then is: how do we reconcile these two pictures? My current view is that on a more fundamental level, our brains assign probabilities. And on a more superficial level, which is partially informed by the fundamental level, we flip a switch between two states: belief and failure to believe.
I think a key question here is: why do we have these two levels, the continuous level which assigns probabilities, and the binary level which flips a switch between two states? I think the reason for the second level is that action is (usually) binary. If you try to draw a map from probability assignment to best course of action (physical action involving our legs and arms), what you find is that the optimal leg/arm action quite often does not range continuously as probability assignment ranges from 0 to 1. Rather, at some threshold value, the optimal leg/arm action switches from one action to another, quite different action—with nothing in between.
So the level of action is a level populated by distinct courses of action with nothing in between, rather than a continuous range of action. What I think, then, is that the binary level of belief versus failure to believe is a kind of half-way point between probability assignments and leg/arm action. What it is, is a translation of assignment of probability (which ranges continuously from zero to one) into a non-continuous, binary belief which is immediately translatable into decision and then into leg/arm action.
But as I think has been agreed, the optimal course of action does not depend merely on probability assignments. It also depends on value assignments. So, depending on your value assignments, the optimal course of action may switch from A to B at P=60%, or alternatively at P=80%, etc. In the case of crossing the street, I argued that the optimal course of action switches at P>99.9%.
But binary belief (i.e. belief versus non-belief), I think, is immediately translatable into decision and action. That, I think, is the function of binary belief. But in that case, since optimal action switches at different P depending on value assignments, then belief must also switch between belief and failure to believe at different P depending on value assignments.
Okay, this makes sense, though I think I’d use ‘belief’ differently.
What does it mean in a situation where I take precautions against two possible but mutually exclusive dangers?
Here’s a concise answer that straightforwardly applies the rule I already stated. Since my rule only applies above 50% and since P(being shot)=10% (as I recall), then we must consider the negation. Suppose P(I will be shot) is 10% and P(I will be stabbed) is 10% and suppose that (for some reason) “I will be shot” and “I will be stabbed” are mutually exclusive. Since P<50% for each of these we turn it around, and get:
P(I will not be shot) is 90% and P(I will not be stabbed) is 90%. Because the cost of being shot, and the cost of being stabbed, are so very high, the threshold for being convinced must be very high as well: set it to 99.9%. Since P=90% for each of these, neither reaches my threshold for being convinced.
Therefore I am not convinced that I will not be shot and I am not convinced that I will not be stabbed. Therefore I will not go without my bulletproof body armor and I will not go without my stab-proof body armor.
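The reasoning above can be checked mechanically. A minimal sketch, using the 10% figures and the 99.9% threshold from the text (the names are mine):

```python
# Conviction threshold set very high because the downside of wrongly
# going unprotected (being shot or stabbed) is severe.
THRESHOLD = 0.999

def convinced(p):
    """Binary 'convinced' state: probability clears the threshold."""
    return p >= THRESHOLD

p_not_shot = 1 - 0.10     # P(I will not be shot) = 90%
p_not_stabbed = 1 - 0.10  # P(I will not be stabbed) = 90%

# Neither negation clears the threshold, so both precautions are taken,
# even though "shot" and "stabbed" are mutually exclusive dangers.
wear_bulletproof_vest = not convinced(p_not_shot)     # True
wear_stabproof_armor = not convinced(p_not_stabbed)   # True
```

Note that the mutual exclusivity of the two dangers never enters the calculation: each precaution is triggered independently by its own sub-threshold negation.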
So the rule seems to work. The fact that these are mutually exclusive dangers doesn’t seem to affect the outcome. [Added: For what I consider to be a more useful discussion of the topic, see my other answer.]
{Added: see my other answer for a concise answer, which however leaves out a lot that I think is important to discuss}
For starters, I think there is no problem understanding these two precautions against mutually exclusive dangers in terms of probability assignments, what I consider the more fundamental level of how we think. In fact, I consider this fact—that we do prepare for mutually exclusive dangers—as evidence that our fundamental way of thinking really is better described in terms of probability assignments than in terms of binary beliefs.
Talk about binary beliefs is folk psychology. As Wikipedia says:
People who think about mind and brain sometimes express misgivings about folk psychology, sometimes going so far as to suggest that beliefs and desires no more exist than witches do. I’m actually taking folk psychology somewhat seriously: in addition to a fundamental, Bayesian level of cognition, I think there is a more superficial, folk-psychological level, on which (binary) beliefs exist in a way that witches do not. I’ve described a role that binary, folk-psychological beliefs can play in the mental economy, as a mediator between Bayesian probability assignment and binary action.
But a problem immediately arises: in mapping probability assignments to different actions, different thresholds apply for different actions. When that happens, the function of declaring a (binary) belief (publicly or silently to oneself) breaks down, because the threshold for declaring belief appropriate to one action is inappropriate to another. I attempted to illustrate this breakdown with the two dialogs between Bob and Max: Bob revises his threshold up mid-conversation when he discovers that the actions he is called upon to perform in light of his stated beliefs are riskier than he had anticipated.
I think that in certain break-down situations, it can become problematic to assign binary, folk-psychological beliefs at all, and so we should fall back on Bayesian probability assignments to describe what the brain is doing. The idea of the Bayesian brain might also of course break down, it’s also just an approximation, but I think it’s a closer approximation. So in those break-down situations, my inclination is to refrain from asserting that a person believes, or fails to believe, something. My preference is to try to understand his behavior in terms of a probability that he has assigned to a possibility, rather than in terms of believing or failing to believe.
Sadly, I think that there is a strong tendency to insist that there is one unique true answer to a question that we have been answering all our lives. For example, to a small child who has not yet learned that the planet is a sphere, “up” is one direction which doesn’t depend on where the child is. And if you send that small child into space, he might immediately wonder, “which way is up?” In fact, even many adults may, in their gut, wonder “which way is up?”, because deep in their gut they believe that there must be an answer, even though intellectually they understand that “up” does not always make sense. The gut feeling that there is a universal “up” arises when someone takes a globe or map of the Earth and turns it upside down: it just looks upside down, even though we understand intellectually that “up” and “down” don’t truly apply here. Similarly, in science fiction space battles, all the ships are oriented in relation to a universal “up”.
Similarly, I think there is a strong tendency to insist that there is one unique and true answer to the question, “what do I believe?” And so we answer the question and hold on tightly to the answer. Because of this, I think that introspection about “what I believe” is suspect.
As I said, I have not entirely figured out the implicit rules that underlie what we (declare to ourselves silently that we) believe. I’ve acknowledged that for P<50%, we seem to withhold (declaration of) belief regardless of what our value assignments are. That being the case, I’m not entirely sure how to answer questions about belief in the case of precautions against dangers with P<50%.
I find it extremely interesting, however, that Pascal actually seems to have bitten the bullet and advocated (declaration of) belief even when P<<50%, given sufficiently extreme value assignments.
I think this is the most common position held on this board—that’s why I found your model confusing.
It seems the edge cases that make it break are very common (for example, taking precautions against a flip of heads and a flip of tails). Moreover, I think the reason it doesn’t work on probabilities below 50% is the same as the reason it doesn’t work on probabilities >= 50%. What lesson do you intend to impart by it?
As an aside, my understanding of Pascal’s wager is that it is an exhortation to seek out the best possible evidence, rather than to “believe something because it would be beneficial if you did” (which doesn’t really make a lot of sense).
That’s a very interesting notion of what “convinced” means. It seems far from what most people would say (I don’t think the term, as generally used, takes the payoff into account). I would however suggest that a delusion about a major branch of academia could potentially have serious results unless the belief is very carefully compartmentalized from impacting other beliefs.
I’m curious, given this situation, what evidence would you consider sufficient to convince you that Andrew is right? What evidence would convince you that Andrew is wrong?
That is essentially what I was getting at in paragraph 4.
This supports my position. While delusion is low-cost for most people (as I explain in paragraph 3), it is not low-cost for everyone (as I explain in paragraph 4). When delusion is high-cost, then a good strategy is to avoid commitment, to admit ignorance, when the assigned probability is below a high threshold. Paragraph 5 says that this is usually true of facts critical to the success of everyday actions. For example, crossing the street: it is a good idea to look carefully both ways before crossing a street. It’s not enough to be 90% sure that there are no cars coming close enough to run over you. That is insufficiently high, because you’ll be run over within days if you cross the street with such a low level of certainty. You need to be well north of 99.9% certain that there are no cars coming before you act on the assumption that there are no cars (i.e. by crossing the street). That’s the only way you can cross the street day after day for eighty years without coming to harm.
People don’t consciously consider it, but the brain is a machine that furthers the interest of the animal, and so the brain can I think be relied upon to take costs and benefits into account in decisions, and therefore in beliefs. For example, what does it take for a person to be convinced that there are no cars coming? If people were willing to cross the street with less than 99.9% probability that there are no cars coming, we would be seeing vastly more accidents than we do. It seems clear then to me that people don’t act as if they’re convinced unless the probability is extremely high. We can tell from the infrequency of accidents, that people aren’t satisfied that there are no cars coming unless they’ve assigned an extremely high probability to it. This must be the case whatever they admit consciously.
Meanwhile, this does not extend to other matters. People are easily satisfied by claims about society, the economy, the government, or celebrities, where the assigned probability must be well below 99.9%.
That’s a very difficult question to answer. I think it’s hard to know ahead of time, hard to model the hypothetical situation before it happens. But I can try to reason from analogous claims. Humans are complex, and so is their biology. So, let’s ask how much evidence it takes to convince the FDA that a drug works, that it does more good than harm. As you know, it’s quite expensive to conduct a study that would be convincing to the FDA. Now, it could be that the FDA is far too careful. So let’s suppose that the FDA is far too careful by a factor of 100. Then, whatever it typically costs to prove to the FDA that a drug works, divide that by 100 to get a rough estimate of what it should take to establish whether what Andrew says is true (or false).
The first article I found says:
And since we’re talking clinical trials, we’re talking a p-value of 0.05. That means that, if the drug doesn’t work at all, there’s a 1 in 20 chance that the trial will spuriously appear to demonstrate that it works. While it depends on the particular case, my guess is that a Bayesian watching the experiment will not assign all that high a probability to the value of the drug. Add to this that many drugs that work on average don’t work at all on an alarming fraction of patients, so the fact that the drug works is a statistical fact, not a fact about each application. So we’re not getting a high probability about the success of individual applications from these expensive trials.
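To make this concrete: a significant result at p = 0.05 does not by itself yield a very high posterior that the drug works. A rough Bayesian sketch, where the 50% prior and 80% power are illustrative assumptions of mine, not figures from any trial:

```python
prior = 0.5   # assumed prior probability that the drug works
power = 0.8   # assumed P(significant result | drug works)
alpha = 0.05  # P(significant result | drug does not work)

# Bayes' rule: P(drug works | significant result)
posterior = (power * prior) / (power * prior + alpha * (1 - prior))
print(round(posterior, 3))  # 0.941
```

Under these (assumed) numbers, a single significant trial takes you to roughly a 94% posterior, which is well short of the 99.9%-style thresholds discussed above for high-stakes actions.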
Dividing by 100, that’s $8 million to $20 million.
Okay, let’s divide by 100 again. That’s $80 thousand to $200 thousand.
So, now I’ve divided by ten thousand, and the cost of establishing the truth to a sufficiently high standard comes to around a hundred thousand dollars—about a year’s pay for a bright, well-educated, hard-working individual.
That doesn’t seem that unreasonable to me, because the notion of a person taking a year out of his life to check something seems not at all unusual. But what about crossing the street? It doesn’t cost a hundred thousand dollars to tell whether there are cars coming. Indeed not—but it’s a concrete fact about a specific time and place, something we can easily and inexpensively check. There are different kinds of facts, some harder than others to check. So the question is, what kind of fact is Andrew’s claim? My sense of it is that it belongs to the category of difficult-to-check.
But it might not. That really depends on what method a person comes up with to check the claim. Emily Rosa’s experiment on therapeutic touch is praised because it was so inexpensive and yet so conclusive. So maybe there is an inexpensive and conclusive demonstration either pro or con Andrew’s claim.
Ah, I think I see the problem. It seems that you are acting under the assumption that a conscious declaration of being “convinced” should cause you to act as if the claim in question has probability 1. Thus, one shouldn’t say one is “convinced” unless one has a lot of evidence. May I suggest that you are possibly confusing cognitive biases with epistemology?
Possibly. But asking oneself what evidence would drastically change one’s confidence in a hypothesis one way or another is a very useful exercise. I would hesitantly suggest that for most questions if one can’t conceive easily of what such evidence would look like then one probably hasn’t thought much about the matter.
So, say math had some terribly strong political bias, what would we expect? Do we see that? Do we not see it? How would we go about testing this assuming we had a lot of resources allocated to testing just this?
Not at all. In fact I pointed out that my account of being “convinced” is continuous with Pascal’s Wager, and Pascal argued in favor of believing on the basis of close to zero probability. As the Stanford Encyclopedia introduces the wager:
Everyone is familiar with it, of course. I only quote the Stanford entry to point out that the wager was in fact about “believing”. And of course nobody gets into heaven without believing, so Pascal wasn’t talking about merely making a bet without an accompanying belief. He was talking about, must have been talking about, belief; he must have been saying that you should believe in God even though there is no evidence of God.
The issue is two-fold: whether mathematicians are less interested in elementary proofs than before, and if they are, why. So, how would you go about checking whether mathematicians are less interested in elementary proofs? What if they produce fewer elementary proofs? But that might be because there aren’t elementary proofs left to do, so you would need to deal with that possibility. How would you do that? Would you survey mathematicians? But a survey would give little confidence to someone who suspects mathematicians of being less interested.
As part of the reason “why”, one possible answer is, “because elementary proofs aren’t that important, really.” I mean, it might be the right thing. How would I know whether it was the right thing? I’m not sure. I’m not sure that it’s not a matter of preference. Well, maybe elementary proofs have a better track record of not ultimately being overturned. How would we check that? Sounds hard.
Well, as I recall, his actual claim was that liberalism causes mathematicians to evade accountability, and part of that evasion is abandoning the search for elementary proofs. So one question to ask is whether liberalism causes a person to evade accountability. There is a lot about liberalism that can arguably be connected to evasion of personal accountability. The specific question is whether liberalism would cause mathematicians to evade mathematical accountability—that is, accountability in accordance with traditional standards of mathematics. If so, this would be part of a more general tendency of liberal academics, liberal thinkers, to seek to avoid personal accountability.
In order to answer this I really think we need to come up with an account of what, exactly, liberalism is. A lot of people have put a lot of work into coming up with an account of what liberalism is, and each person comes up with a different account. For example there is Thomas Sowell’s account of liberals in his Conflict of Visions.
What, exactly, liberalism is, would greatly affect the answer to the question of whether liberalism accounts for the avoidance (if it exists) of personal accountability.
I will go ahead and give you just one, highly speculative, account of liberalism and its effect on academia. Here goes. Liberalism is the ideology of a certain class of people, and the ideology grows in part out of the class. We can think of it as a religion, which is somewhat adapted to the people it occurs in, just as Islam is (presumably) somewhat adapted to the Middle East, and so on. Among other things, liberalism extols bureaucracy, such as by preferring regulation of the marketplace, which is rule by bureaucrats over the economy. This is in part connected to the fact that liberalism is the ideology of bureaucrats. However, internally, bureaucracy grows in accordance with a logic that is connected to the evasion of personal responsibility by bureaucrats. If somebody does something foolish and gets smacked for it, the bureaucratic response is to establish strict rules to which all must adhere. Now the next time something foolish is done, the person can say, “I’m following the rules”, which he is. It is the rules which are foolish. But the rules aren’t any person. They can’t be smacked. Voila—evasion of personal responsibility. This is just one tiny example.
So, to recap, liberalism is the ideology of bureaucracy, and extols bureaucracy, and bureaucracy is in no small part built around the ideal of the avoidance of personal responsibility. One is, of course, still accountable in some way—but the nature of the accountability is radically different. One is now accountable for following the intricate rules of the bureaucracy to the letter. One is not personally accountable for the real-world disasters that are produced by bureaucracy which has gone on too long.
The liberal mindset, then, is the bureaucratic mindset, and the bureaucratic mindset revolves around the evasion of personal accountability, at least has a strong element of evasion.
Now we get to the universities. The public universities are already part of the state. The professors work for the state. They are bureaucratized. What about private universities? They are also largely connected with the state, especially insofar as professors get grants from the state. Long story short, academic science has turned into a vast bureaucracy, scientists have turned into bureaucrats. Scientific method has been replaced by such things as “peer review”, which is a highly bureaucratized review by anonymous (and therefore unaccountable) peers. Except that the peers are accountable—though not to the truth. They are accountable to each other and to the writers they are reviewing, much as individual departments within a vast bureaucracy are filled with people who are accountable—to each other. What we get is massive amounts of groupthink, echo chamber, nobody wanting to rock the boat, same as we get in bureaucracy.
So now we get to mathematicians.
Within a bureaucracy, your position is safe and your work is easy. There are rules, probably intricate rules, but as long as you follow the rules, and as long as you’re a team player, you can survive. You don’t actually have to produce anything valuable. The rules are originally intended to guide the production of valuable goods, but in the end, just as industries capture their regulatory authority, so do bureaucrats capture the rules they work under. So they push a lot of paper but accomplish nothing.
I mean, here’s a prediction from this theory: we should see a lot of trivial papers published, papers that don’t really advance the field in any significant way but merely add to the count of papers published.
And in fact this is what we see. So the theory is confirmed! Not so fast—I already knew about the academic paper situation, so maybe I concocted a theory that was consistent with this.
It seems that Pascal’s Wager is a particularly difficult example to work with since it involves a hypothesized entity that actively rewards one for giving a higher probability assignment to that hypothesis.
I’m not sure what a good definition of “liberalism” is, but the definition you use seems to mean something closer to bureaucratic authoritarianism, which obviously isn’t the same, given that most self-identified liberals want less government involvement in many family-related issues (e.g. gay marriage). It is likely that there is no concise definition of these sorts of terms, since which policy attitudes are common is to a large extent a product of history and social forces rather than coherent ideology.
Well, nice of you to admit that you already knew this. But at the same time, this seems to be a terribly weak prediction even if one didn’t know about it. One expects that as fields advance and there is less low-hanging fruit, more and more seemingly minor papers will be published. (I’m not sure there are many published papers which are actually trivial; minor and trivial are not the same thing.)
Mm. I’m not quite sure this is true. Many liberals I know are perfectly content with the level of government involvement in (for example) marriage—we just want the nature of that involvement to not discriminate against (for example) gays.
Almost all hypotheses have this property. If you’re really in event X, then you’d be better off believing that you’re in X.
I think what Joshua meant was that the situation rewards the belief directly rather than the actions taken as a result of the belief, as is more typical.
Yes, but there was no explanation of why it’s “particularly difficult”, and the only property listed as justifying this characterization is almost universally present everywhere, including the cases that are not at all difficult. I pointed out how this property doesn’t work as an explanation.
I think the phrase “entity that actively rewards one for giving a higher probability...” made the point clear enough. If my state of information implies a 1% probability that a large asteroid will strike Earth in the next fifty years, then I would be best off assigning 1% probability to that, because the asteroid’s behaviour isn’t hypothesized to depend at all on my beliefs about it. If my state of information implies a 1% probability that there is a God who will massively reward only those who believe in his existence with 100% certainty, and who will punish all others, then that’s an entity that’s actively rewarding certain people based on having overconfident probability assignments; so the difficulty is in the possibility and desirability of treating one’s own probability assignments as just another thing to make decisions about.
I understand where the difficulty comes from, my complaint was with justification of the presence of the difficulty given in Joshua’s comment. Maybe you’re right, and the onus of justification was on the word “actively”, even though it wasn’t explained.
Let belief A include “having at least .9 belief in A has a great outcome, independent of actions”, where the great outcome in question is worth a dominating amount of utility. If an agent somehow gets into the epistemic state of having .5 belief in A (and does not hold any opposing beliefs about direct punishments for believing A), (and updating its beliefs without evidence is an available action), it will update to have .9 belief in A. If it encounters evidence against A that wouldn’t reduce the probability low enough to counter the dominating utility of the great outcome, it will ignore it. And if it does not keep a record of the evidence it has processed, just updating incrementally, it will not notice when it has accumulated enough evidence to warrant discarding A.
Of course, this illustration of the problem depends on the agent having certain heuristics and biases.
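The failure mode described above can be made concrete with a minimal Python sketch. Everything here is an illustrative assumption, not anyone’s proposed agent design: the reward size, the credence threshold, and the decision rule are made up to show how a belief-dependent payoff dominates the choice of credence.

```python
# Sketch of belief A: "holding credence >= 0.9 in A yields a great outcome,
# independent of actions." All numbers are illustrative.

REWARD = 1_000_000  # utility of the great outcome; chosen to dominate everything else
THRESHOLD = 0.9     # minimum credence at which the hypothesized reward applies

def expected_utility(credence: float) -> float:
    """Expected utility of *holding* this credence, under the self-referential
    reward clause: the reward only applies if A is true AND credence >= THRESHOLD."""
    return credence * REWARD if credence >= THRESHOLD else 0.0

def choose_credence(current: float) -> float:
    """An agent for which 'update without evidence' is an available action
    simply picks whichever credence has higher expected utility."""
    return max([current, THRESHOLD], key=expected_utility)

# An agent at .5 credence jumps straight to .9, as in the scenario above.
print(choose_credence(0.5))  # 0.9
# Evidence that would otherwise push credence down to .3 gets ignored,
# because holding .9 still dominates in expected utility.
print(choose_credence(0.3))  # 0.9
```

The sketch also makes the final point visible: since `choose_credence` only compares utilities and keeps no record of past evidence, no amount of incremental counter-evidence ever moves the agent off the threshold.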
This is a good start, but on Conservapedia “liberal” and “liberalism” are pretty much local jargon, and their meanings have departed from normal usage in the real world. It is not overstating the case to say that Schlafly uses “liberal” to mean pretty much anything he doesn’t like.
JoshuaZ:
That is true. The easy case is when clear ideological rifts can be seen even in the disputes among credentialed experts, as in economics. The much more difficult case is when there is a mainstream consensus that looks suspiciously ideological.
This sounds like it’s probably a hoax by hostile editors. It reminds me of the famous joke from Sokal’s hoax paper in which he described the feminist implications of the axioms of equality and choice. Come to think of it, it might even be inspired directly by Sokal’s joke.
No, the comments have been made by the project’s founder Andrew Schlafly. He’s also claimed that the Fields Medal has a liberal bias (disclaimer: that’s a link to my own blog.) Andrew also has a page labeled Counterexamples to Relativity written almost exclusively by him that claims among other things that “The theory of relativity is a mathematical system that allows no exceptions. It is heavily promoted by liberals who like its encouragement of relativism and its tendency to mislead people in how they view the world.”
I will add to help prevent mind-killing that Conservapedia is not taken seriously by much of the American right-wing, and that this sort of extreme behavior is not limited to any specific end of the political spectrum.
1) It is plausible that an element of affirmative action could have crept into the awarding of the Fields Medal. It is not unreasonable to suspect that it has. Any number of biases might creep in to the awarding of a prize, however major it is. For example, it could well be that a disproportionate number of Norwegians or Swedes have won the Nobel relative to their accomplishments, because of location.
2) That the mathematics of relativity (either special or general) “allows no exceptions” is trivial but, as far as I can see, true, because it is true of any mathematical system that exceptions to the system are, pretty much by definition, not included inside the system. Anything inside the system itself is not an exception to it. So, trivial. But not false. What we really need to do is see why the point is brought up.
Looking further into the matter of “exceptions”, to see why he brought up the true but trivial point with respect to relativity, in the main article I found this:
He appears to be saying that relativity breaks down at the Big Bang. He doesn’t appear to provide any ground for making this claim, but it seems likely. Wikipedia says something similar in its article on black holes:
The Big Bang is a singularity, and in that respect is similar to black holes, so if general relativity breaks down completely in a black hole, then I would imagine it would also be likely to break down completely at the Big Bang.
3) That people have often speciously used Einstein’s relativity as a metaphor to promote all sorts of relativism is well known. People have similarly speciously used QM to promote all sorts of nonsense. So that particular point is hardly controversial, I think.
I have never relied on Conservapedia and don’t intend to start, whereas I use Wikipedia several times a day, but these particular attacks on Conservapedia seem weak.
I’m not particularly inclined towards a charitable interpretation of arguments written by Andrew Schlafly. In my own short time frequenting the site, I found him rendering judgments on others’ work based on the premise that
“No facts conflict with conservative ideology
therefore, anything which conflicts with conservative ideology is not a fact.”
If you try to interpret his views in the most reasonable light you can, you probably haven’t understood him. He’s a living embodiment of Poe’s Law.
Did you read the page in question or the entire quote I gave? The first sentence isn’t a big problem (although I think you aren’t parsing correctly what he’s trying to say). The second sentence I quoted was “It is heavily promoted by liberals who like its encouragement of relativism and its tendency to mislead people in how they view the world.”
And yes, a small handful of his 33 “counterexamples” touch on genuine issues that we don’t understand, and a handful (such as #33) are standard physics puzzles. Then you have things like #9, which claims that a problem with relativity is “The action-at-a-distance by Jesus, described in John 4:46-54.” (I suppose you could argue that this is a good thing since he’s trying to make his beliefs pay rent.) And some of them are just deeply confusing, such as #14, which claims that the changing mass of the standard kilogram is somehow a problem for relativity. I don’t know what exactly he’s getting at there.
But the overarching point I was trying to make is somewhat beside the point: the problem I was illustrating was the danger in turning claims that others are being ideological into fully general counterarguments. Given the labeling of relativity as being promoted by “liberals” and the apparent conflation with moral relativism, this seems to be a fine example.
Incidentally, note that Conservapedia’s main article on relativity points out actual examples where some on the left have tried to make very poor analogies between general relativity and their politics; but the editors don’t seem to appreciate that someone claiming “Theory A supports my political belief B” doesn’t mean the proper response is to attack Theory A. This article also includes the interesting line “Despite censorship of dissent about relativity, evidence contrary to the theory is discussed outside of liberal universities.” This is consistent with the project’s apparent general approach, as with much in American politics, of making absolutely everything part of the great mindkilling.
I can see that he attacks relativity, devotes a disproportionate amount of space to attacks, and relatively little to an explanation, though comparing it to his article on quantum mechanics it’s not that small—his article on QM is the equivalent of a Wikipedia stub. But it’s not obvious to me that the liberalism of some of its supporters is the actual reason for the problems he has with it.
It is in general difficult to tell what the “actual” motivations are for an individual’s beliefs. Often they are complicated. Regarding math and physics there’s a general pattern that Andrew doesn’t like things that are counterintuitive. I suspect that the dislike of special and general relativity comes in part from that.
Sure. In the case of the Nobel prizes this claim has been made before. In particular, the claim is frequently made that the Nobel Prize in Literature has favored northern Europeans and has had serious political overtones. There’s a strong argument that the committee has generally been unwilling to award the prize to people with extreme right-wing politics while being fine with awarding it to those on the extreme left. Moreover, you have cases like Eyvind Johnson, who got the prize despite being on the committee itself and not being well known outside Sweden. (I’m not sure if any of his major works had even been translated into English or French when he got the prize.) And every few years there’s a minor row when someone on the literature committee decides to bash US literature in general, connecting it to broad criticism of the US and its culture (see for example this).
There’s also no question that politics has played heavy roles in the awarding of the Peace Prize.
And in the sciences there have been serious allegations of sexism in the awarding of the prizes. The best source for this, as far as I’m aware, is “The Madame Curie Complex” by Julie Des Jardins (unfortunately it isn’t terribly well written, at times exaggerates the accomplishments of some individuals, sees patterns where they may not exist, and suffers from other problems).
But, saying “it isn’t unreasonable to suspect X” is different from asserting X without any evidence.
Isn’t this a bit like saying “politics has played a heavy role in electing the President of the United States?” The Peace Prize is a political award.
True, but this appears to be from a more free-wheeling, conservative-pundit blog-like section of the ’pedia, rather than from its articles. I think that once it’s understood that this section is a highly opinionated blog, the particular assertion seems to fit comfortably. For instance, right now, one of the entries reads:
Socialist England! Not enough to say “England”.
The “Socialist England” item is from the news section, and the topic does not have its own article on Conservapedia. It links to a Reuters article, which is nowhere near as dire as the Conservapedia headline makes it out to be.
The relativity article, and the other main articles linked on the main page, are clearly standard articles and not intended to be viewed as simple opinion blogs. It has no attribution, and lists eighteen references in the exact same manner as a Wikipedia article.
At best it is misguided, at worst it is intended to intentionally misinform people about the theory.
At the end of the article counterexamples to evolution, an old earth, and the Bible are linked to, with exactly the same format (and worse mischaracterizations than the Relativity article).
Random articles of more innocuous subjects (like book) have exactly the same format.
Again, it’s clearly the meat of the website, as more mundane articles do no more than go out of their way to add a mention of the Bible or Jesus in some way.
Ouch. I’ve never read more than one or two Conservapedia articles before, and I didn’t know it was that bad.
Conservapedia is so gibberingly insane it inspired the creation of RationalWiki. (Which has its bouts of reversed stupidity.)
http://rationalwiki.org/wiki/Conservapedia:Conservapedian_relativity came to some prominence last year when Prof Brian Cox discovered the Conservapedia article, which then got some blogosphere interest.
It is important to note here that Andrew Schlafly, founder of Conservapedia and author of most of these articles, has a degree in electrical engineering and worked as an engineer for several years before becoming a lawyer. He would not only be capable of understanding the mathematics, he would have used concepts from the theory in his professional work. At least most engineer cranks aren’t this bad.
David_Gerard:
In fairness to relativity crackpots, unless things have changed since my freshman days, the way special relativity is commonly taught in introductory physics courses is practically an invitation for the students to form crackpot ideas. Instead of immediately explaining the idea of the Minkowski spacetime, which reduces the whole theory almost trivially to some basic analytic geometry and calculus and makes all those so-called “paradoxes” disappear easily in a flash of insight, physics courses often take the godawful approach of grafting a mishmash of weird “effects” (like “length contraction” and “time dilatation”) onto a Newtonian intuition and then discussing the resulting “paradoxes” one by one. This approach is clearly great for pop-science writers trying to dazzle and amaze their lay audiences, but I’m at a loss to understand why it’s foisted onto students who are supposed to learn real physics.
I thought Conservapedia as a whole was a hoax. Poe’s law...
As far as I can tell a lot of it is a hoax, though the founder may have a hard time telling which editors are creative trolls and which editors (if any) are serious.
It is periodically asserted by people claiming to be former contributors to Conservapedia that the founder simply endorses contributors who overtly support him and rejects those who overtly challenge him.
If that were true, I’d expect that editors who are willing to craft contributions that overtly support the main themes of the site get endorsed, even if their articles are absurd to the point of self-parody.
I haven’t made a study of CP, but that sounds awfully plausible to me.
You will be unsurprised to hear that CP has played out in precisely that manner: a parodist coming in, dancing on the edges of Poe and wreaking havoc by feeding Schlafly’s biases.
I am hereby stealing the phrase “Dancing on the edge of Poe.”
I figured I should let you know.
So very true. :)