I tried it again and it worked. Making sure the cursor wasn’t active in the blanks may have been what fixed it.
Annoyance
Poker isn’t just about calculating probabilities; it’s also about disguising your reactions and effectively reading others’. Being rational has nothing to do with competence at social interaction and deception.
A good test has no confounding variables. Poker, then, is not a good test of rationality.
An excellent point and suggestion.
Any test in which there are confounding variables should be suspect, and every attempt should be made to eliminate them. Looking at ‘winners’ isn’t useful unless we know the way in which they won indicates rationality. Lottery winners got lucky. Playing the lottery has a negative expected return. Including lottery winners in the group you scrutinize means you’re including stupid people who were the beneficiaries of a single turn of good fortune.
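To make that concrete, here’s the arithmetic with invented numbers (real lotteries differ in the details, but not in the sign):

```python
# Hypothetical lottery: a $2 ticket with a 1-in-300-million chance
# at a $100M jackpot. All figures are invented for illustration.
p_win   = 1 / 300_000_000
jackpot = 100_000_000
ticket  = 2

expected_return = p_win * jackpot - ticket
print(expected_return)  # about -1.67: an expected loss of $1.67 per ticket
```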
The questions we should be asking ourselves are: What criteria distinguish rationality from non-rationality? What criteria distinguish between degrees of rationality?
Whether a person memorizes and uses the table is still a viable test. No rational person playing to win would take an action incompatible with the table, and acting only in ways compatible with the table is unlikely to be accidental for an irrational person.
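A sketch of what checking table-compatibility might look like; the table fragment here is hypothetical, standing in for whatever chart is actually being memorized:

```python
# Hypothetical fragment of a strategy table, keyed by
# (player total, dealer upcard) -> prescribed action.
TABLE = {
    (16, 10): "hit",
    (17, 10): "stand",
    (11, 6):  "double",
}

def compatible_with_table(plays):
    """True iff every observed (situation, action) pair matches
    the table's prescription for that situation."""
    return all(TABLE.get(situation) == action for situation, action in plays)

observed = [((16, 10), "hit"), ((17, 10), "stand")]
print(compatible_with_table(observed))  # True: every action matched the table
```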
A way of determining whether people act rationally when it is relatively easy to do so can be quite valuable, since most people don’t.
An ideal rationality test would be perfectly specific: there would be no way to pass it other than being rational. We can’t conveniently create such a test, but we can at least make it difficult to pass our tests by utilizing simple procedures that don’t require rationality to implement.
Any ‘game’ in which the best strategies can be known and preset would then be ruled out. It’s relatively easy to write a computer program to play poker (minus the social interaction). Same goes for blackjack. It takes rationality to create such a program, but the program doesn’t need rationality to function.
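For instance, here’s a toy fixed-policy blackjack player (rules heavily simplified, policy borrowed from the dealer’s stand-on-17 rule). Writing it took some judgment; running it takes none:

```python
import random

def draw():
    # Cards 2-10 at face value, J/Q/K as 10, and the ace crudely as 1.
    return min(random.randint(1, 13), 10)

def play_hand():
    """Play one hand with a preset policy: hit below 17, then stand.
    The running program exercises no rationality; the strategy is frozen in."""
    total = draw() + draw()
    while total < 17:
        total += draw()
    return total

print(play_hand())
```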
I became an atheist fairly early, but it took me longer to realize there was no Santa Claus. The idea didn’t make sense, but the presents appeared under the tree, and my parents denied being responsible, so clearly they’d gotten there somehow. I concluded that I just didn’t understand some important part of how the world worked.
One year, we’d just moved into a new house. For the first time, we had a real fireplace, made of brick. I excitedly spoke of how this would make visiting much easier on Santa, but wondered how he could make it down a chimney at all, and began making plans to string a net of dental floss across the opening in an attempt to see how Santa dealt with the obstacle.
I had been leaning on the brickwork, looking up the flue, as I said these things, and as I turned around I intercepted a look my parents were giving each other. Translated into English, it might have said something like “Isn’t this precious?”
In that moment, I intuited that there was no Santa Claus, and that my parents had been lying to me because they thought my belief was cute.
I had already learned that not everyone was my friend. I already knew that some people who weren’t my friends actively wished to harm me. But that was the first time I really grasped the idea that my parents had goals and preferences of their own that they would choose over my welfare, that I couldn’t rely on them not to harm me for their own benefit.
Before that time, I took for granted without thinking about it that people’s stances toward things could be easily derived from what they said and did. Enemies were obvious; so were friends. Only afterwards did I really understand not only that appearances were deceiving but that people would actively create false appearances.
Instead of relying on my first impressions, I began to withhold judgment and (although I lacked the words to describe it at the time) actively seek new evidence to test my beliefs.
It would be desirable to be able to tell, after the fact, which comments/posts I’ve already voted on.
“The literary industry that I called ‘excellence pornography’ isn’t very good at what it does.”
No, it’s great at what it does. It’s not very good at what it represents itself as attempting.
A rational belief isn’t necessarily correct or true. Rational beliefs are justified, in that they logically follow from premises that are accepted as true. In the case of probabilistic statements, a rational strategy is one that maximizes the chance of being correct or otherwise reaching a defined goal state. It doesn’t have to work or be correct in any ultimate sense to be rational.
If I play the lottery and win, playing the lottery turned out to be a way to get lots of money. It doesn’t mean that playing the lottery was a rational strategy. If I make a reasonable investment and improbable misfortune strikes, losing the money, that doesn’t mean that the investment wasn’t rational.
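A quick simulation of that distinction, with made-up payoffs: any single draw can make the bad strategy look good and the good one look bad, but the expectation is what rationality answers to.

```python
import random

random.seed(0)

def lottery_ticket():
    # Invented odds: a $1 ticket with a one-in-a-million shot at $500,000.
    # Expected value: 1e-6 * 500_000 - 1 = -$0.50 per ticket.
    return (500_000 if random.random() < 1e-6 else 0) - 1

def investment():
    # Invented odds: 90% chance of a 30% gain, 10% chance of total loss
    # on $1 staked. Expected value: 0.9 * 1.30 - 1 = +$0.17.
    return (1.30 if random.random() < 0.90 else 0.0) - 1

trials = 1_000_000
print(sum(lottery_ticket() for _ in range(trials)) / trials)  # about -0.50
print(sum(investment() for _ in range(trials)) / trials)      # about +0.17
```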
Yes, it’s a vacuous truth, which is why I object to its negation being offered as a reasonable statement.
Let’s rephrase: excellence pornography is terrible at what it claims to do, but is excellent at what it is intended to do: get people to buy lots of it without ultimately reducing the market for itself.
We can think of cholera transmission (or indeed any memetic spread) as a feedback loop.
There are positive and negative feedback loops, depending on what properties we’re examining: positive loops lead to a greater and greater value of the property, while negative loops converge on some set value.
Ideally we want to set up our mental environments so that error is trapped in negative feedback loops and reduced as much as possible, while correctness is amplified. In terms of assigned probability, wrongness should go to zero and correctness to one.
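One concrete form of that negative feedback loop is iterated Bayesian updating. In this sketch (likelihoods invented for illustration), each observation feeds back into the assigned probability, driving it toward 1 for the true hypothesis and 0 for the false one:

```python
import random

random.seed(1)
p_true = 0.5  # prior probability assigned to the true hypothesis
for _ in range(50):
    # Evidence is generated by the true hypothesis, which predicts a
    # positive observation 80% of the time; the false one predicts 30%.
    e = random.random() < 0.8
    like_true  = 0.8 if e else 0.2
    like_false = 0.3 if e else 0.7
    # Bayes' rule: each update corrects the current estimate.
    p_true = p_true * like_true / (p_true * like_true + (1 - p_true) * like_false)

print(p_true)  # close to 1.0: error damped toward zero, correctness amplified
```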
The methods for bringing this about are widely known but, oddly, not widely recognized and even less widely applied. They’re called logic.
What use is it to have correct beliefs if you don’t know they’re correct?
If the belief cannot be conveniently tested empirically, or it would be useless to do so, the only way we can know that our belief is correct is by being confident of the methodology through which we reached it.
I can’t agree that it’s a good argument. Pratchett, through the character of Death, conflates the problem of constructing absolute standards with the ‘problem’ of finding material representations of complex concepts through isolating basic parts.
It’s the sort of alchemical thinking that should have been discarded with, well, alchemists. Of course you can’t grind down reality and find mercy. Can you smash a computer and find the essence of the computations it was carrying out? The very act of taking the computer apart and reducing it destroys the relationships it embodied.
Of course, you can find computation in atoms… just not the ones the computer was doing.
If you recognize that believing certain things has positive instrumental results even when those beliefs aren’t true, why can’t you simply discard the false beliefs and create those results directly?
Human brains are (loosely speaking) universal Turing machines: they can emulate any computation. So if we’re after a particular set of results, we aren’t tied to an invalid route for reaching them. There’s always a valid path that gets us to where we want to be.
“she just did this once, etc. How did she do it?”
By appealing to a non-rational or irrational argument that would lead the person to adopt rationality.
Arguing rationally with a person who isn’t rational that they should take up the process is a waste of time. If it would work, it wouldn’t be necessary. It’s easy to say what course should be taken with a rational person, because rational thought is all alike. Irrational thought patterns can be nearly anything, so there’s no way to specify an argument that will convince everyone. You’d need to construct an argument that each person is specifically vulnerable to.
Why would we regard an effective placebo as a victory? Why would we want our enemies to profit?
I can think of all sorts of reasons to oppose the existence of a type of person who is made more fit by delusion. Simple eugenics combined with long-term thinking would seem to suggest that we should encourage the destruction of such people.
“I naturally prefer to have a high level of confidence in my beliefs.”
Doesn’t that depend on how reliable those beliefs are?
If you’re fleeing through the temple pursued by a boulder, you don’t want to dither at an intersection, so whichever direction you think you should go needs to stay constant from one moment to the next. But there’s no reason your confidence must be high to avoid dithering; you need merely be stable.
“I’ll take the path I believe leads to safety. This will turn out to be a wise choice”
If, and only if, your belief is correct. If your belief is wrong, your choice is a disastrous one. Rationality isn’t about being right or choosing the best course; it’s about knowing that you’re right and knowing which course is best to choose.
You’re right, that is happening—I wouldn’t have noticed if you hadn’t pointed out the effect.
Maybe most people would notice, and I’m oblivious, but I’d recommend making the difference a bit less subtle.
But those good ol’ frontal lobes permit universal computation. We can do it. We’re just not very good at it.
If you can emulate arithmetic, the only limit is memory capacity. Ignore that issue, and you’re a UTM.
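FRACTRAN makes that claim vivid: its only operations are multiplication and divisibility tests on integers, yet it’s Turing-complete. A minimal interpreter, with a one-fraction program that adds two numbers stored as prime exponents:

```python
def fractran(program, n, max_steps=100_000):
    """Repeatedly multiply n by the first fraction (p, q) in the
    program for which n*p/q is an integer; halt when none applies."""
    for _ in range(max_steps):
        for p, q in program:
            if (n * p) % q == 0:
                n = n * p // q
                break
        else:
            return n  # no fraction applied: the program halts
    raise RuntimeError("step limit exceeded")

# The one-fraction program [3/2] turns 2**a * 3**b into 3**(a + b),
# computing a + b with nothing but arithmetic.
a, b = 5, 7
assert fractran([(3, 2)], 2**a * 3**b) == 3**(a + b)
```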
The link and comment score thresholds in the Preferences menu give the impression that by leaving them blank, all articles and comments will be shown regardless of their score.
If left blank, however, the preferences don’t seem to save; they appear to revert to zero, so nothing with a score below zero shows.