That’s a very interesting notion of what “convinced” means. It seems far from what most people would say (I don’t think that term when generally used takes the pay-off into account). I would however suggest that a delusion about a major branch of academia could potentially have serious results unless the belief is very carefully compartmentalized from impacting other beliefs.
I’m curious, given this situation, what evidence would you consider sufficient to convince you that Andrew is right? What evidence would convince you that Andrew is wrong?
I would however suggest that a delusion about a major branch of academia could potentially have serious results unless the belief is very carefully compartmentalized from impacting other beliefs.
That is essentially what I was getting at in paragraph 4.
This supports my position. While delusion is low-cost for most people (as I explain in paragraph 3), it is not low-cost for everyone (as I explain in paragraph 4). When delusion is high-cost, then a good strategy is to avoid commitment, to admit ignorance, when the assigned probability is below a high threshold. Paragraph 5 says that this is usually true of facts critical to the success of everyday actions. For example, crossing the street: it is a good idea to look carefully both ways before crossing a street. It’s not enough to be 90% sure that there are no cars coming close enough to run over you. That is insufficiently high, because you’ll be run over within days if you cross the street with such a low level of certainty. You need to be well north of 99.9% certain that there are no cars coming before you act on the assumption that there are no cars (i.e. by crossing the street). That’s the only way you can cross the street day after day for eighty years without coming to harm.
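The arithmetic behind this can be sketched quickly. The crossing counts below are illustrative assumptions, and the sketch makes the deliberately pessimistic simplification that every misjudged crossing leads to harm:

```python
# Two crossings a day for eighty years (an assumed, illustrative rate).
crossings = 2 * 365 * 80                  # about 58,400 crossings

# At 90% per-crossing certainty, the expected wait until the first
# misjudgment is 1 / (1 - 0.9) = 10 crossings, i.e. about five days.
expected_crossings_to_accident = 1 / (1 - 0.90)

# To have, say, a 99% chance of never coming to harm over all those
# crossings, the per-crossing certainty p must satisfy
# p ** crossings >= 0.99.
p_required = 0.99 ** (1 / crossings)

print(f"expected crossings before an accident at 90% certainty: "
      f"{expected_crossings_to_accident:.0f}")
print(f"required per-crossing certainty: {p_required:.8f}")
```

Under these assumptions the required per-crossing certainty comes out well above 99.99%, which is the sense in which everyday action-critical beliefs demand probabilities "well north of 99.9%".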
It seems far from what most people would say (I don’t think that term when generally used takes the pay-off into account).
People don’t consciously consider it, but the brain is a machine that furthers the interests of the animal, and so the brain, I think, can be relied upon to take costs and benefits into account in decisions, and therefore in beliefs. For example, what does it take for a person to be convinced that there are no cars coming? If people were willing to cross the street with less than 99.9% probability that there are no cars coming, we would be seeing vastly more accidents than we do. It seems clear to me, then, that people don’t act as if they’re convinced unless the probability is extremely high. We can tell from the infrequency of accidents that people aren’t satisfied that there are no cars coming unless they’ve assigned an extremely high probability to it. This must be the case whatever they admit consciously.
This does not, however, extend to other matters. People are easily satisfied by claims about society, the economy, the government, or celebrities, where the assigned probability must surely be well below 99.9%.
I’m curious, given this situation, what evidence would you consider sufficient to convince you that Andrew is right? What evidence would convince you that Andrew is wrong?
That’s a very difficult question to answer. I think it’s hard to know ahead of time, hard to model the hypothetical situation before it happens. But I can try to reason from analogous claims. Humans are complex, and so is their biology. So, let’s ask how much evidence it takes to convince the FDA that a drug works, that it does more good than harm. As you know, it’s quite expensive to conduct a study that would be convincing to the FDA. Now, it could be that the FDA is far too careful. So let’s suppose that the FDA is far too careful by a factor of 100. So, whatever it typically costs to prove to the FDA that a drug works, divide that by 100 to get a rough estimate of what it should take to establish whether what Andrew says is true (or false).

The first article I found says:

Estimates about the cost of developing a new drug vary widely, from a low of $800 million to nearly $2 billion per drug.
And since we’re talking clinical trials, we’re talking a p-value threshold of 0.05. That means that, if the drug doesn’t work at all, there’s a 1 in 20 chance that the trial will spuriously demonstrate that it works. While it depends on the particular case, my guess is that a Bayesian watching the experiment will not assign all that high a probability to the value of the drug. Add to this that many drugs that work on average don’t work at all on an alarming fraction of patients, and the fact that the drug works is a statistical fact, not a fact about each application. So we’re not getting a high probability about the success of individual applications from these expensive trials.
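To make that concrete, here is a minimal Bayesian calculation. The prior and power figures are invented for illustration, not real trial numbers:

```python
# Illustrative assumptions, not real FDA figures.
prior = 0.10   # prior probability that the candidate drug works
power = 0.80   # P(significant trial result | drug works)
alpha = 0.05   # P(significant trial result | drug does not work)

# Bayes' theorem: P(drug works | one significant trial result)
posterior = (power * prior) / (power * prior + alpha * (1 - prior))
print(f"posterior after one significant trial: {posterior:.2f}")
```

With these numbers, a single significant trial moves a 10% prior only to 64%, nowhere near the street-crossing standard of certainty.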
Dividing by 100, that’s $8 million to $20 million.
Okay, let’s divide by 100 again. That’s $80 thousand to $200 thousand.
So, now I’ve divided by ten thousand, and the cost of establishing the truth to a sufficiently high standard comes to around a hundred thousand dollars—about a year’s pay for a bright, well-educated, hard-working individual.
That doesn’t seem that unreasonable to me, because the notion of a person taking a year out of his life to check something seems not at all unusual. But what about crossing the street? It doesn’t cost a hundred thousand dollars to tell whether there are cars coming. Indeed not—but it’s a concrete fact about a specific time and place, something we can easily and inexpensively check. There are different kinds of facts, some harder than others to check. So the question is, what kind of fact is Andrew’s claim? My sense of it is that it belongs to the category of difficult-to-check.
But it might not. That really depends on what method a person comes up with to check the claim. Emily Rosa’s experiment on therapeutic touch is praised because it was so inexpensive and yet so conclusive. So maybe there is an inexpensive and conclusive demonstration either for or against Andrew’s claim.
This supports my position. While delusion is low-cost for most people (as I explain in paragraph 3), it is not low-cost for everyone (as I explain in paragraph 4). When delusion is high-cost, then a good strategy is to avoid commitment, to admit ignorance, when the assigned probability is below a high threshold. Paragraph 5 says that this is usually true of facts critical to the success of everyday actions. For example, crossing the street: it is a good idea to look carefully both ways before crossing a street. It’s not enough to be 90% sure that there are no cars coming close enough to run over you. That is insufficiently high, because you’ll be run over within days if you cross the street with such a low level of certainty.
Ah, I think I see the problem. It seems that you are acting under the assumption that a conscious declaration of being “convinced” should cause you to act as if the claim in question has probability 1. Thus, one shouldn’t say one is “convinced” unless one has a lot of evidence. May I suggest that you are possibly confusing cognitive biases with epistemology?
So the question is, what kind of fact is Andrew’s claim? My sense of it is that it belongs to the category of difficult-to-check.
Possibly. But asking oneself what evidence would drastically change one’s confidence in a hypothesis one way or another is a very useful exercise. I would hesitantly suggest that for most questions if one can’t conceive easily of what such evidence would look like then one probably hasn’t thought much about the matter.
So, say math had some terribly strong political bias, what would we expect? Do we see that? Do we not see it? How would we go about testing this assuming we had a lot of resources allocated to testing just this?
Ah, I think I see the problem. It seems that you are acting under the assumption that a conscious declaration of being “convinced” should cause you to act as if the claim in question has probability 1. Thus, one shouldn’t say one is “convinced” unless one has a lot of evidence. May I suggest that you are possibly confusing cognitive biases with epistemology?
Not at all. In fact I pointed out that my account of being “convinced” is continuous with Pascal’s Wager, and Pascal argued in favor of believing on the basis of close to zero probability. As the Stanford Encyclopedia introduces the wager:
“Pascal’s Wager” is the name given to an argument due to Blaise Pascal for believing, or for at least taking steps to believe, in God.
Everyone is familiar with it of course. I only quote the Stanford Encyclopedia to point out that it was in fact about “believing”. And of course nobody gets into heaven without believing. So Pascal wasn’t talking about merely making a bet without an accompanying belief. He was talking about, must have been talking about, belief, must have been saying you should believe in God even though there is no evidence of God.
I would hesitantly suggest that for most questions if one can’t conceive easily of what such evidence would look like then one probably hasn’t thought much about the matter.
The issue is two-fold: whether mathematicians are less interested in elementary proofs than before, and if they are, why. So, how would you go about checking to see whether mathematicians are less interested in elementary proofs? What if they produce fewer elementary proofs? That might simply be because there aren’t elementary proofs left to do. So you would need to deal with that possibility. How would you do that? Would you survey mathematicians? But a survey would give little confidence to someone who suspects mathematicians of being less interested.
As part of the reason “why”, one possible answer is, “because elementary proofs aren’t that important, really.” I mean, it might be the right thing. How would I know whether it was the right thing? I’m not sure. I’m not sure that it’s not a matter of preference. Well, maybe elementary proofs have a better track record of not ultimately being overturned. How would we check that? Sounds hard.
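A cheap first pass on the first half of the question might look something like this. Every number below is a made-up placeholder: a real check would pull phrase frequencies and paper totals from a corpus such as MathSciNet or arXiv metadata, and would still face the confound just discussed:

```python
# Hypothetical yearly data: papers mentioning an "elementary proof",
# and total papers published that year (placeholder values).
mentions = {2000: 41, 2005: 38, 2010: 35, 2015: 30, 2020: 28}
totals   = {2000: 20000, 2005: 26000, 2010: 34000, 2015: 45000, 2020: 60000}

years = sorted(mentions)
rates = [mentions[y] / totals[y] for y in years]

# Ordinary least-squares slope of mention rate against year.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(rates) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, rates))
         / sum((x - mean_x) ** 2 for x in years))

print("declining" if slope < 0 else "not declining")
```

A negative slope would only show that the phrase is getting rarer, not why; distinguishing declining interest from a shrinking supply of elementary proofs still requires the further controls mentioned above.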
So, say math had some terribly strong political bias, what would we expect? Do we see that? Do we not see it?
Well, as I recall, his actual claim was that liberalism causes mathematicians to evade accountability, and part of that evasion is abandoning the search for elementary proofs. So one question to ask is whether liberalism causes a person to evade accountability. There is a lot about liberalism that can arguably be connected to evasion of personal accountability. The specific question is whether liberalism would cause mathematicians to evade mathematical accountability—that is, accountability in accordance with traditional standards of mathematics. If so, this would be part of a more general tendency of liberal academics, liberal thinkers, to seek to avoid personal accountability.
In order to answer this I really think we need to come up with an account of what, exactly, liberalism is. A lot of people have put a lot of work into coming up with an account of what liberalism is, and each person comes up with a different account. For example there is Thomas Sowell’s account of liberals in A Conflict of Visions.
What, exactly, liberalism is, would greatly affect the answer to the question of whether liberalism accounts for the avoidance (if it exists) of personal accountability.
I will go ahead and give you just one, highly speculative, account of liberalism and its effect on academia. Here goes. Liberalism is the ideology of a certain class of people, and the ideology grows in part out of the class. We can think of it as a religion, which is somewhat adapted to the people it occurs in, just as Islam is (presumably) somewhat adapted to the Middle East, and so on. Among other things, liberalism extols bureaucracy, such as by preferring regulation of the marketplace, which is rule by bureaucrats over the economy. This is in part connected to the fact that liberalism is the ideology of bureaucrats. However, internally, bureaucracy grows in accordance with a logic that is connected to the evasion of personal responsibility by bureaucrats. If somebody does something foolish and gets smacked for it, the bureaucratic response is to establish strict rules to which all must adhere. Now the next time something foolish is done, the person can say, “I’m following the rules”, which he is. It is the rules which are foolish. But the rules aren’t any person. They can’t be smacked. Voila—evasion of personal responsibility. This is just one tiny example.
So, to recap, liberalism is the ideology of bureaucracy, and extols bureaucracy, and bureaucracy is in no small part built around the ideal of the avoidance of personal responsibility. One is, of course, still accountable in some way—but the nature of the accountability is radically different. One is now accountable for following the intricate rules of the bureaucracy to the letter. One is not personally accountable for the real-world disasters that are produced by bureaucracy which has gone on too long.
The liberal mindset, then, is the bureaucratic mindset, and the bureaucratic mindset revolves around the evasion of personal accountability, at least has a strong element of evasion.
Now we get to the universities. The public universities are already part of the state. The professors work for the state. They are bureaucratized. What about private universities? They are also largely connected with the state, especially insofar as professors get grants from the state. Long story short, academic science has turned into a vast bureaucracy, scientists have turned into bureaucrats. Scientific method has been replaced by such things as “peer review”, which is a highly bureaucratized review by anonymous (and therefore unaccountable) peers. Except that the peers are accountable—though not to the truth. They are accountable to each other and to the writers they are reviewing, much as individual departments within a vast bureaucracy are filled with people who are accountable—to each other. What we get is massive amounts of groupthink, echo chamber, nobody wanting to rock the boat, same as we get in bureaucracy.
So now we get to mathematicians.
Within a bureaucracy, your position is safe and your work is easy. There are rules, probably intricate rules, but as long as you follow the rules, and as long as you’re a team player, you can survive. You don’t actually have to produce anything valuable. The rules are originally intended to guide the production of valuable goods, but in the end, just as industries capture their regulatory authority, so do bureaucrats capture the rules they work under. So they push a lot of paper but accomplish nothing.
I mean, here’s a prediction from this theory: we should see a lot of trivial papers published, papers that don’t really advance the field in any significant way but merely add to the count of papers published.
And in fact this is what we see. So the theory is confirmed! Not so fast—I already knew about the academic paper situation, so maybe I concocted a theory that was consistent with this.
It seems that Pascal’s Wager is a particularly difficult example to work with since it involves a hypothesized entity that actively rewards one for giving a higher probability assignment to that hypothesis.
I’m not sure what a good definition of “liberalism” is, but the definition you use seems to mean something closer to bureaucratic authoritarianism, which obviously isn’t the same, given that most self-identified liberals want less government involvement in many family-related issues (e.g. gay marriage). It is likely that there is no concise definition of these sorts of terms, since which policy attitudes are common is to a large extent a product of history and social forces rather than of any coherent ideology.
I mean, here’s a prediction from this theory: we should see a lot of trivial papers published, papers that don’t really advance the field in any significant way but merely add to the count of papers published.
Well, it’s nice of you to admit that you already knew this. But, at the same time, this seems to be a terribly weak prediction even if one didn’t know about it. One expects that, as fields advance and the low-hanging fruit is picked, more and more seemingly minor papers will be published (I’m not sure there are many papers published which are trivial; minor and trivial are not the same thing).
given that most self-identified liberals want less government involvement in many family-related issues (e.g. gay marriage).
Mm. I’m not quite sure this is true. Many liberals I know are perfectly content with the level of government involvement in (for example) marriage—we just want the nature of that involvement to not discriminate against (for example) gays.
It seems that Pascal’s Wager is a particularly difficult example to work with since it involves a hypothesized entity that actively rewards one for giving a higher probability assignment to that hypothesis.
Almost all hypotheses have this property. If you’re really in event X, then you’d be better off believing that you’re in X.
I think what Joshua meant was that the situation rewards the belief directly rather than the actions taken as a result of the belief, as is more typical.
Yes, but there was no explanation of why it’s “particularly difficult”, and the only property listed as justifying this characterization is almost universally present everywhere, including the cases that are not at all difficult. I pointed out how this property doesn’t work as an explanation.
I think the phrase “entity that actively rewards one for giving a higher probability...” made the point clear enough. If my state of information implies a 1% probability that a large asteroid will strike Earth in the next fifty years, then I would be best off assigning 1% probability to that, because the asteroid’s behaviour isn’t hypothesized to depend at all on my beliefs about it. If my state of information implies a 1% probability that there is a God who will massively reward only those who believe in his existence with 100% certainty, and who will punish all others, then that’s an entity that’s actively rewarding certain people based on having overconfident probability assignments; so the difficulty is in the possibility and desirability of treating one’s own probability assignments as just another thing to make decisions about.
I understand where the difficulty comes from, my complaint was with justification of the presence of the difficulty given in Joshua’s comment. Maybe you’re right, and the onus of justification was on the word “actively”, even though it wasn’t explained.
Let belief A include “having at least .9 belief in A has a great outcome, independent of actions”, where the great outcome in question is worth a dominating amount of utility. If an agent somehow gets into the epistemic state of having .5 belief in A (and not having any opposing beliefs of direct punishments for believing A), and updating its beliefs without evidence is an available action, it will update to have .9 belief in A. If it encounters evidence against A that wouldn’t reduce the probability low enough to counter the dominating utility of the great outcome, it will ignore it. If it does not keep a record of the evidence it has processed, just updating incrementally, it will not notice even when it has accumulated enough evidence to discard A.
Of course, this illustration of the problem depends on the agent having certain heuristics and biases.
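The heuristics in question can be made concrete with a toy sketch. Everything here, the class name and payoff numbers included, is invented purely for illustration:

```python
class CredulousAgent:
    """Toy agent from the comment above: belief A promises a dominating
    payoff merely for being held at credence >= 0.9."""

    INFLATED = 0.9
    GREAT_OUTCOME = 1e6   # dominating utility if A is true and believed

    def __init__(self, credence):
        self.credence = credence   # current credence in A

    def maybe_inflate(self):
        # "Updating without evidence" is an available action, and the
        # expected payoff credence * GREAT_OUTCOME dominates any other
        # consideration, so the agent jumps straight to 0.9.
        if 0 < self.credence < self.INFLATED:
            if self.credence * self.GREAT_OUTCOME > 1:
                self.credence = self.INFLATED

    def update_against(self, likelihood_ratio):
        # An incremental Bayesian update against A (likelihood_ratio < 1)...
        odds = self.credence / (1 - self.credence) * likelihood_ratio
        self.credence = odds / (1 + odds)
        # ...which the agent then undoes, because it keeps no record of
        # past evidence and the payoff still dominates.
        self.maybe_inflate()

agent = CredulousAgent(credence=0.5)
agent.maybe_inflate()
print(agent.credence)        # 0.9
agent.update_against(0.1)    # strong evidence against A
print(agent.credence)        # 0.9 again: the evidence left no trace
```

Only evidence strong enough to push the credence below the point where the expected payoff dominates (here, below one in a million) would ever stick, which is the biased behaviour described above.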
This is a good start, but on Conservapedia “liberal” and “liberalism” are pretty much local jargon and their meanings have departed the normative usages in the real world. It is not overstating the case to say that Schlafly uses “liberal” to mean pretty much anything he doesn’t like.
That’s a very interesting notion of what “convinced” means. It seems far from what most people would say (I don’t think that term when generally used takes the pay-off into account). I would however suggest that a delusion about a major branch of academia could potentially have serious results unless the belief is very carefully compartmentalized from impacting other beliefs.
I’m curious, given this situation, what evidence would you consider sufficient to convince you that Andrew is right? What evidence would convince you that Andrew is wrong?
That is essentially what I was getting at in paragraph 4.
This supports my position. While delusion is low-cost for most people (as I explain in paragraph 3), it is not low-cost for everyone (as I explain in paragraph 4). When delusion is high-cost, then a good strategy is to avoid commitment, to admit ignorance, when the assigned probability is below a high threshold. Paragraph 5 says that this is usually true of facts critical to the success of everyday actions. For example, crossing the street: it is a good idea to look carefully both ways before crossing a street. It’s not enough to be 90% sure that there are no cars coming close enough to run over you. That is insufficiently high, because you’ll be run over within days if you cross the street with such a low level of certainty. You need to be well north of 99.9% certain that there are no cars coming before you act on the assumption that there are no cars (i.e. by crossing the street). That’s the only way you can cross the street day after day for eighty years without coming to harm.
People don’t consciously consider it, but the brain is a machine that furthers the interest of the animal, and so the brain can I think be relied upon to take costs and benefits into account in decisions, and therefore in beliefs. For example, what does it take for a person to be convinced that there are no cars coming? If people were willing to cross the street with less than 99.9% probability that there are no cars coming, we would be seeing vastly more accidents than we do. It seems clear then to me that people don’t act as if they’re convinced unless the probability is extremely high. We can tell from the infrequency of accidents, that people aren’t satisfied that there are no cars coming unless they’ve assigned an extremely high probability to it. This must be the case whatever they admit consciously.
In the meantime this does not extend to other matters. People are easily satisfied of claims about society, the economy, the government, celebrities, where the assigned probability has to be well below 99.9%.
That’s a very difficult question to answer. I think it’s hard to know ahead of time, hard to model the hypothetical situation before it happens. But I can try to reason from analogous claims. Humans are complex, and so is their biology. So, let’s ask how much evidence it takes to convince the FDA that a drug works, that it does more good than harm. As you know, it’s quite expensive to conduct a study that would be convincing to the FDA. Now, it could be that the FDA is far too careful. So let’s suppose that the FDA is far too careful by a factor of 100. So, whatever it typically costs to prove to the FDA that a drug work, divide that by 100 to get a rough estimate of what it should take to establish whether what Andrew says is true (or false).
The first article I found says:
And since we’re talking clinical trials, we’re talking p-value of 5. That means that, if the drug doesn’t work at all, there’s a 1 in 20 chance that the trial will spuriously demonstrate that it works. While it depends on the particular case, my guess is that a Bayesian watching the experiment will not assign a probability all that high to the value of the drug. Add to this that even many drugs that work on average don’t work at all on an alarming fraction of patients, and the fact that the drug works is a statistical fact, not a fact about each application. So we’re not getting a high probability about the success of individual application from these expensive trials.
Dividing by 100, that’s $8 million to $20 million.
Okay, let’s divide by 100 again. That’s $80 thousand to $200 thousand.
So, now I’ve divided by ten thousand, and the cost of establishing the truth to a sufficiently high standard comes to around a hundred thousand dollars—about a year’s pay for a bright, well-educated, hard-working individual.
That doesn’t seem that unreasonable to me, because the notion of a person taking a year out of his life to check something seems not at all unusual. But what about crossing the street? It doesn’t cost a hundred thousand dollars to tell whether there are cars coming. Indeed not—but it’s a concrete fact about a specific time and place, something we can easily and inexpensively check. There are different kinds of facts, some harder than others to check. So the question is, what kind of fact is Andrew’s claim? My sense of it is that it belongs to the category of difficult-to-check.
But it might not. That really depends on what method a person comes up with to check the claim. Emily Rosa’s experiment on therapeutic touch is praised because it was so inexpensive and yet so conclusive. So maybe there is an inexpensive and conclusive demonstration either pro or con Andrew’s claim.
Ah, I think I see the problem. It seems that you acting under the assumption that conscious declaration of being “convinced” should cause you to act like the claim in question has probability 1. Thus, one shouldn’t say one is “convinced” unless one has a lot of evidence. May I suggest that you are possibly confusing cognitive biases with epistemology?
Possibly. But asking oneself what evidence would drastically change one’s confidence in a hypothesis one way or another is a very useful exercise. I would hesitantly suggest that for most questions if one can’t conceive easily of what such evidence would look like then one probably hasn’t thought much about the matter.
So, say math had some terribly strong political bias, what would we expect? Do we see that? Do we not see it? How would we go about testing this assuming we had a lot of resources allocated to testing just this?
Not at all. In fact I pointed out that my account of being “convinced” is continuous with Pascal’s Wager, and Pascal argued in favor of believing on the basis of close to zero probability. As the Stanford Encyclopedia introduces the wager:
Everyone is familiar with it of course. I only quote the Stanford to point out that it was in fact about “believing”. And of course nobody gets into heaven without believing. So Pascal wasn’t talking about merely making a bet without an accompanying belief. He was talking about, must have been talking about, belief, must have been saying you should believe in God even though there is no evidence of God.
The issue is two-fold: whether mathematicians are less interested in elementary proofs than before, and if they are, why. So, how would you go about checking to see whether mathematicians are less interested in elementary proofs? What if they do fewer elementary proofs? But it might be because there aren’t elementary proofs to do. So you would need to deal with that possibility. How would you do that? Would you survey mathematicians? But the survey would give little confidence to someone who suspect mathematicians of being less interested.
As part of the reason “why”, one possible answer is, “because elementary proofs aren’t that important, really.” I mean, it might be the right thing. How would I know whether it was the right thing? I’m not sure. I’m not sure that it’s not a matter of preference. Well, maybe elementary proofs have a better track record of not ultimately being overturned. How would we check that? Sounds hard.
Well, as I recall, his actual claim was that liberalism causes mathematicians to evade accountability, and part of that evasion is abandoning the search for elementary proofs. So one question to ask is whether liberalism causes a person to evade accountability. There is a lot about liberalism that can arguably be connected to evasion of personal accountability. The specific question is whether liberalism would cause mathematicians to evade mathematical accountability—that is, accountability in accordance with traditional standards of mathematics. If so, this would be part of a more general tendency of liberal academics, liberal thinkers, to seek to avoid personal accountability.
In order to answer this I really think we need to come up with an account of what, exactly, liberalism is. A lot of people have put a lot of work into coming up with an account of what liberalism is, and each person comes up with a different account. For example there is Thomas Sowell’s account of liberals in his Conflict of Visions.
What, exactly, liberalism is, would greatly affect the answer to the question of whether liberalism accounts for the avoidance (if it exists) of personal accountability.
I will go ahead and give you just one, highly speculative, account of liberalism and its effect on academia. Here goes. Liberalism is the ideology of a certain class of people, and the ideology grows in part out of the class. We can think of it as a religion, which is somewhat adapted to the people it occurs in, just as Islam is (presumably) somewhat adapted to the Middle East, and so on. Among other things, liberalism extols bureaucracy, such as by preferring regulation of the marketplace, which is rule by bureaucrats over the economy. This is in part connected to the fact that liberalism is the ideology of bureaucrats. However, internally, bureaucracy grows in accordance with a logic that is connected to the evasion of personal responsibility by bureaucrats. If somebody does something foolish and gets smacked for it, the bureaucratic response is to establish strict rules to which all must adhere. Now the next time something foolish is done, the person can say, “I’m following the rules”, which he is. It is the rules which are foolish. But the rules aren’t any person. They can’t be smacked. Voila—evasion of personal responsibility. This is just one tiny example.
So, to recap, liberalism is the ideology of bureaucracy, and extols bureaucracy, and bureaucracy is in no small part built around the ideal of the avoidance of personal responsibility. One is, of course, still accountable in some way—but the nature of the accountability is radically different. One is now accountable for following the intricate rules of the bureaucracy to the letter. One is not personally accountable for the real-world disasters that are produced by bureaucracy which has gone on too long.
The liberal mindset, then, is the bureaucratic mindset, and the bureaucratic mindset revolves around the evasion of personal accountability, at least has a strong element of evasion.
Now we get to the universities. The public universities are already part of the state. The professors work for the state. They are bureaucratized. What about private universities? They are also largely connected with the state, especially insofar as professors get grants from the state. Long story short, academic science has turned into a vast bureaucracy, scientists have turned into bureaucrats. Scientific method has been replaced by such things as “peer review”, which is a highly bureaucratized review by anonymous (and therefore unaccountable) peers. Except that the peers are accountable—though not to the truth. They are accountable to each other and to the writers they are reviewing, much as individual departments within a vast bureaucracy are filled with people who are accountable—to each other. What we get is massive amounts of groupthink, echo chamber, nobody wanting to rock the boat, same as we get in bureaucracy.
So now we get to mathematicians.
Within a bureaucracy, your position is safe and your work is easy. There are rules, probably intricate rules, but as long as you follow the rules, and as long as you’re a team player, you can survive. You don’t actually have to produce anything valuable. The rules are originally intended to guide the production of valuable goods, but in the end, just as industries capture their regulatory authority, so do bureaucrats capture the rules they work under. So they push a lot of paper but accomplish nothing.
I mean, here’s a prediction from this theory: we should see a lot of trivial papers published, papers that don’t really advance the field in any significant way but merely add to the count of papers published.
And in fact this is what we see. So the theory is confirmed! Not so fast—I already knew about the academic paper situation, so maybe I concocted a theory that was consistent with this.
It seems that Pascal’s Wager is a particularly difficult example to work with, since it involves a hypothesized entity that actively rewards one for assigning a higher probability to the hypothesis of its existence.
I’m not sure what a good definition of “liberalism” is, but the definition you use seems to mean something closer to bureaucratic authoritarianism, which obviously isn’t the same, given that most self-identified liberals want less government involvement in many family-related issues (e.g. gay marriage). It is likely that there is no concise definition of these sorts of terms, since which policy attitudes are common is to a large extent a product of history and social forces rather than of coherent ideology.
Well, nice of you to admit that you already knew this. But, at the same time, this seems a terribly weak prediction even if one didn’t know about it. One expects that, as fields advance and there is less low-hanging fruit, more and more seemingly minor papers will be published. (I’m not sure there are many published papers which are trivial; minor and trivial are not the same thing.)
Mm. I’m not quite sure this is true. Many liberals I know are perfectly content with the level of government involvement in (for example) marriage—we just want the nature of that involvement to not discriminate against (for example) gays.
Almost all hypotheses have this property. If you’re really in event X, then you’d be better off believing that you’re in X.
I think what Joshua meant was that the situation rewards the belief directly rather than the actions taken as a result of the belief, as is more typical.
Yes, but there was no explanation of why it’s “particularly difficult”, and the only property listed as justifying this characterization is present almost everywhere, including in cases that are not at all difficult. I pointed out how this property doesn’t work as an explanation.
I think the phrase “entity that actively rewards one for giving a higher probability...” made the point clear enough. If my state of information implies a 1% probability that a large asteroid will strike Earth in the next fifty years, then I would be best off assigning 1% probability to that, because the asteroid’s behaviour isn’t hypothesized to depend at all on my beliefs about it. If my state of information implies a 1% probability that there is a God who will massively reward only those who believe in his existence with 100% certainty, and who will punish all others, then that’s an entity that’s actively rewarding certain people based on having overconfident probability assignments; so the difficulty is in the possibility and desirability of treating one’s own probability assignments as just another thing to make decisions about.
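The asymmetry can be sketched as a toy expected-utility comparison. Everything here is an illustrative assumption (the 1% figures come from the comment above, but the payoff magnitudes, the squared-error scoring stand-in, and the 0.999 cutoff are made up for the sketch):

```python
# Toy comparison: belief-independent vs. belief-dependent payoffs.

def asteroid_payoff(believed_p):
    # The asteroid strikes with probability 0.01 regardless of what anyone
    # believes; here miscalibration is simply penalized by squared error,
    # so the calibrated belief is optimal.
    true_p = 0.01
    return -(believed_p - true_p) ** 2

def wager_payoff(believed_p):
    # Hypothesized deity: with probability 0.01 it exists and massively
    # rewards only near-certain believers, mildly punishing everyone else.
    # The payoff now depends on the belief itself, not on any action.
    true_p = 0.01
    reward = 1_000_000 if believed_p >= 0.999 else -1
    return true_p * reward

# Calibrated belief wins in the asteroid case...
assert asteroid_payoff(0.01) > asteroid_payoff(0.999)
# ...but the wager directly rewards overconfidence.
assert wager_payoff(0.999) > wager_payoff(0.01)
```

The point of the sketch is just that in the first function the argument only matters through a calibration penalty, while in the second the argument is itself what gets rewarded, which is what makes one's own probability assignment "another thing to make decisions about".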
I understand where the difficulty comes from, my complaint was with justification of the presence of the difficulty given in Joshua’s comment. Maybe you’re right, and the onus of justification was on the word “actively”, even though it wasn’t explained.
Let belief A include “having at least .9 belief in A has a great outcome, independent of actions”, where the great outcome in question is worth a dominating amount of utility. If an agent somehow gets into the epistemic state of having .5 belief in A (and has no opposing beliefs about direct punishments for believing A), and updating its beliefs without evidence is an available action, it will update to have .9 belief in A. If it encounters evidence against A that wouldn’t reduce the probability low enough to counter the dominating utility of the great outcome, it will ignore that evidence. And if it does not keep a record of the evidence it has processed, just updating incrementally, it will never notice when it has accumulated enough evidence to discard A.
Of course, this illustration of the problem depends on the agent having certain heuristics and biases.
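A minimal sketch of such an agent, under assumed numbers (the dominating utility, the cost of a false belief, and the .9 threshold are all stipulated, not derived from anything above):

```python
def bayes_update(p, likelihood_ratio):
    """Odds-form Bayesian update; likelihood_ratio = P(e|A) / P(e|not A)."""
    odds = (p / (1 - p)) * likelihood_ratio
    return odds / (1 + odds)

DOMINATING_UTILITY = 1e9    # assumed payoff for holding P(A) >= 0.9
COST_OF_FALSE_BELIEF = 1.0  # assumed cost of believing A when A is false
THRESHOLD = 0.9

class WishfulAgent:
    """Agent that treats its own credence in A as an action to optimize."""

    def __init__(self, p):
        self.p = p  # current credence in A; no record of processed evidence

    def maybe_inflate(self):
        # "Updating without evidence" is an available action: if the expected
        # reward of holding the belief dominates, jump to the threshold.
        if self.p * DOMINATING_UTILITY > (1 - self.p) * COST_OF_FALSE_BELIEF:
            self.p = max(self.p, THRESHOLD)

    def observe_against(self, likelihood_ratio):
        # Evidence against A (likelihood_ratio < 1), evaluated in isolation.
        posterior = bayes_update(self.p, likelihood_ratio)
        if posterior * DOMINATING_UTILITY > (1 - posterior) * COST_OF_FALSE_BELIEF:
            return  # too weak to counter the dominating utility: ignored
        self.p = posterior

agent = WishfulAgent(0.5)
agent.maybe_inflate()  # moves from 0.5 to 0.9 with no evidence at all
# Because each weak piece of counter-evidence is discarded rather than
# recorded, even a hundred of them never accumulate:
for _ in range(100):
    agent.observe_against(likelihood_ratio=0.5)
assert agent.p == THRESHOLD
```

This is the heuristics-and-biases dependence made explicit: the failure requires both that the agent may set its credence directly and that it evaluates each piece of evidence in isolation without keeping a record.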
This is a good start, but on Conservapedia “liberal” and “liberalism” are pretty much local jargon, and their meanings have departed from normative usage in the real world. It is not overstating the case to say that Schlafly uses “liberal” to mean pretty much anything he doesn’t like.