Wait. I’m reading that again.
“This now?”
What emergency caused him to tap out using his time-turner on Thursday, anyway?
This is one of those things that frustrates me. There should be a nice concise word for the sorts of materials that are being collected on this site that doesn’t require a ton of clarification, doesn’t invoke whatever demons have caused ‘rationalism’ to be associated with anti-feminism in some circles, or suggest an innate contradiction with the useful pieces of empirical practice.
How many times in the past few months have I tried to express to someone, “I’ve been learning all sorts of interesting things about cognitive defects and undesirable human behavioral phenomena, applications of Bayes’ theorem, and other collected topics concerning epistemology in an effort to be less crazy, help others be less crazy, and be a more effective human”—or something along those lines? If for no other reason than brevity, I would love to have a word that helps succinctly express that, or something reasonably close to it, such that if you look into the word, you understand that it covers this type of interest.
Incidentally, if anyone can tell me what that “rationalism is anti-woman” thing is about, I’d really love to know. I don’t see any particularly compelling reason that ‘traditional rationalism’ should be associated with anti-feminism (except as expressed by anti-feminists who also happened to subscribe to that philosophy, and even then, I don’t know who those people are).
I think that statement right there is the crux of it.
I have mixed feelings on Clausewitz, but one thing that did seep in from my first read of On War was that it is very hard to achieve success (let alone measure it) if you don’t have a clear idea of what your goal is. “Kill lots of enemy” is not a particularly good goal.
Unconventional advice for an 8 year old. Distinct from advice for a parent.
Study humans and take notes on their behavior, because when you are older it may be hard to understand what it was like to be a kid.
Recognize strengths in others that surprise you. One of the ones that eluded this 31-year-old for 29 years is that interest in a subject is a variable that you are capable of controlling, and helps a lot with being good at a subject.
Teach others, and give others an opportunity to teach, because that is a social skill that will provide value in the settings you are likely to wind up in.
Listen to authority, but ever with a critical ear.
Play and design games with peers until your tastes are refined enough to interest adults. Seek out people who can make you better at playing and designing games.
Read books, and ignore anything your parents say about ‘bed time’. Bed time is the part of the night where the big lights go off and the smaller lights go on, and you read stories, try drawing games that Vihart (youtube) inspired you to try, consider your place in the universe, and engage in dialogue with yourself on the things that confuse and trouble you.
Get used to the idea of existential crisis, and don’t shy away from ideas that scare you, though be aware that others do shy away from them, and that if you talk a lot about the things that scare you, people usually will think you are crazy or troubled.
And I’ll throw in a vote for anyone who says anything in favor of learning languages, because (whatever the benefits of learning languages are, developmentally) they broaden the amount of information that is accessible.
Better yet—explain it to those of us who don’t have an encyclopedic knowledge of chess.
My extremely limited understanding suggests that this is an unpopular opening, and that, barring a well-analyzed play history suggesting it was favorable, Kasparov is unlikely to open with P-KN4 against an equal opponent; so RYK opening with that suggests we got lucky and drew one of the 1/1000 “let’s try something fun” games.
I rank it as very possible that my extremely limited understanding is wildly incorrect, but with cryptic comments like these, I stand to benefit little from this insight, if correct.
I’m not sure I’d call Russia the winner in this war. It seems like having been unlucky enough to have been involved is already some flavor of losing.
I respect the insight though, that Team A, characterized by quality a, defeating Team B, characterized by quality b, is not a story of a beats b, especially when you’re wrong in the first place about Team A not also being characterized (in part) by quality b.
I liken this kind of talk to “fall of the Roman Empire” talk—modern humans have an eerie tendency to try to explain the past in ways that support their current viewpoints, and willfully ignore evidence that tells them their explanations are not very fit.
OK, I’ll play.
Given that the EY here in both scenarios is the active agent (EY, in the flesh, and the RYK box, consisting of a synthetic EY guessing at what K does, and a system choosing randomly, based on EY-over-K predictions, what the box does next)...
Yes. “never” is a strong word here. Assume when he says “never” he means, “EY thinks it is really unlikely for K to do”.
In other words, when EY sees EY-playing-K-but-Selecting-Moves-Randomly-in-proportion-with-predictions-of-K, and observes RYK has played an unexpected/unlikely move, EY concludes that RYK has probably picked a move that is evaluated to be less fit than some other move. RYK is a worse player than EY because RYK is not picking its best options. RYK is the gambler that sometimes takes the sucker bet.
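The move-selection rule described above—picking a move in proportion to EY’s predictions of K—can be sketched as weighted sampling. The moves and probability numbers below are invented for illustration, not anything from the actual thought experiment:

```python
import random

# Hypothetical predicted probabilities that Kasparov plays each opening move.
# These weights are made up for illustration.
predicted = {"e4": 0.55, "d4": 0.30, "Nf3": 0.12, "g4": 0.03}

def ryk_move(predictions, rng=random):
    """Pick a move with probability proportional to its predicted weight."""
    moves = list(predictions)
    weights = [predictions[m] for m in moves]
    return rng.choices(moves, weights=weights, k=1)[0]

move = ryk_move(predicted)
```

Under these invented weights, the unlikely move (“g4”) still gets sampled a few percent of the time, which is exactly the sense in which RYK sometimes “takes the sucker bet” that neither EY nor K would take deliberately.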
I think that one of the more critical aspects (feedback) has been glossed over to a degree that it falls short of its goal of being a good introduction.
I suggest that some editing is in order. I don’t actively discourage another attempt; I suspect most of us have considered writing a “Why this matters and what we mean” post, and while there are other good materials on the site, more good ones probably won’t do much harm.
Is there some easily communicable message here for doomsayers that stands a decent chance of kick-starting the “Oy, was I mistaken!” part of their brains into gear?
Took the survey.
Would probably not have defected a year ago, and it would not have been an easy decision for me at that time.
I appear to be getting better at estimating.
I think the IQ questions should probably just be dropped from future tests. A number of people get tested as kids, get crazy numbers, and never get tested again (since there’s no real point, and people are generally afraid of seeing that number dive; people who get a crazy number are probably less likely to retest than others). That’s a charitable explanation for the results in last year’s survey, which I didn’t take.
“Therefore, this kind of experiment can never convince me of the reality of Mrs Stewart’s ESP; not because I assert Pf=0 dogmatically at the start, but because the verifiable facts can be accounted for by many alternative hypotheses, every one of which I consider inherently more plausible than Hf, and none of which is ruled out by the information available to me.
“Indeed, the very evidence which the ESP’ers throw at us to convince us, has the opposite effect on our state of belief; issuing reports of sensational data defeats its own purpose. For if the prior probability for deception is greater than that of ESP, then the more improbable the alleged data are on the null hypothesis of no deception and no ESP, the more strongly we are led to believe, not in ESP, but in deception. For this reason, the advocates of ESP (or any other marvel) will never succeed in persuading scientists that their phenomenon is real, until they learn how to eliminate the possibility of deception in the mind of the reader. As (5.15) shows, the reader’s total prior probability for deception by all mechanisms must be pushed down below that of ESP.”
E. T. Jaynes, Probability Theory (§ 5.2.2)
I found this (and the preceding bit) noteworthy on two points; first in the obvious mathematical respect that explains the relationship between favored hypotheses and less favored hypotheses which are both supported by data;
Second, by the realization that researchers favoring ESP most likely fail to apprehend the hypothesis that they are testing, with respect to their critics. In the case in question, they collected 37,100 predictions, which seems a little excessive considering it had essentially no persuasive power to skeptics.
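Jaynes’s point can be illustrated with toy numbers. All of the priors and likelihoods below are invented for illustration (they are not from the book): the sensational data is nearly impossible under the chance hypothesis, and both deception and ESP account for it equally well, so the posterior simply redistributes mass according to the priors of the surviving hypotheses.

```python
# Invented priors: deception is rare, but far more plausible than ESP.
priors = {"chance": 0.999, "deception": 9.9e-4, "esp": 1e-6}
# Invented likelihoods of the observed run of hits under each hypothesis.
likelihoods = {"chance": 1e-12, "deception": 1.0, "esp": 1.0}

unnorm = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: unnorm[h] / total for h in unnorm}

# The data crushes the chance hypothesis, but nearly all of the posterior
# lands on deception rather than ESP, because deception had the larger prior.
```

With these numbers the posterior for deception comes out around 0.999 and the posterior for ESP around 0.001: more sensational data only makes the deception hypothesis look better.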
yet our instructions are sequential and language-based.
Care to elaborate on that? Edit: OK, I realize why I was confused by this. The act of instruction in a subject, as opposed to a metaphor for elements of thought as computer instructions?
It’s more useful than that, even.
There are also times where the problem isn’t necessarily memorization, but just a lapse of insight that makes it hard to realize that a problem as presented matches one of your pre-canned equations, even though it can be solved with one of them. Panic sets in, etc.
In situations like that, particularly in those years when you have calculus and various transforms in your toolkit (even if they aren’t strictly /expected/), you can solve the problem with those power tools instead, and having understood and being able to derive solutions to closely related problems from basic principles ought to be fairly predictive of you being able to generate a correct answer in those situations.
I’m rather curious;
If you take people across a big swath of humanities, and ask them about subjects where there is a substantial amount of debate and not a lot of decisive evidence—say, theories of a historical Jesus—how many of those people are going to describe one of those theories as more likely than not?
Like, if you have dozens of theories that you’ve studied and examined closely, are we going to see people assigning >50% to their favored theory? Or will people be a lot more conservative with their confidence?
Interesting data I am still digesting: I ran across this story, which summarizes the results of a study someone did to see if doctors were able to correctly interpret test results. original source. The important thing to note here is that these people are generally trained on this notion that tests have false positives and negatives and how to deal with that. The poor performance of doctors at this kind of analysis, despite being trained on it, potentially using that skill every day, caring about the outcome, and being in the presence of the outcome fairly often—this has raised my perception of the difficulty required to train people to learn and apply even small amounts of probability theory. Specifically, I think this provides some hint that moral or utility arguments are not going to convert the masses. I’ve been wondering if there’s a way to trick people into developing and applying skills.
Fun project: I started working on a dinner game. I’m not entirely sure what the end form of this game will look like, but right now, it involves sticking an object in a closed box, passing the box around, and having people try to determine what is in the box without looking inside it. I tried this out with my family last night; they seemed to enjoy the idea of trying to experimentally determine properties of an unseen object and then trying to figure out what it might be. They were able to pretty precisely determine the shape of the object and two were able to guess the object correctly (though one changed position upon hearing an observation from another player). Several identified unreal properties of the object, but didn’t fixate on them. Based on the success of that run, I am going to try more exotic shapes and materials in future games to see how that changes things. Ultimately, I’d like people to weigh in on propositions like, “The object is red” despite being uncertain what the object is.
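That last step—stating a credence in “the object is red” without committing to a single guess—amounts to marginalizing over the candidate objects. A minimal sketch, with an invented candidate list and made-up numbers:

```python
# Hypothetical player beliefs: probability assigned to each candidate object,
# and the probability the object is red given that candidate.
candidates = {
    "apple":       {"prob": 0.5, "p_red": 0.8},
    "tennis ball": {"prob": 0.3, "p_red": 0.0},
    "toy block":   {"prob": 0.2, "p_red": 0.3},
}

# P(red) = sum over objects of P(object) * P(red | object)
p_red = sum(c["prob"] * c["p_red"] for c in candidates.values())
# 0.5*0.8 + 0.3*0.0 + 0.2*0.3 = 0.46
```

So a player could report roughly a 46% credence that the object is red while still being genuinely uncertain which object it is.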
I’ve started taking an online course on naturalism, and have been quite surprised by the quality of the discussion questions, which have illustrated to me that my thinking is not particularly precise about various unlikely metaphysical propositions. And speaking of metaphysical propositions and imprecise thinking, I am still looking for a good argument somewhere about how to set sensible priors for metaphysical propositions.
The speaker is a football guy, if that helps. But yes, I also find it a distasteful remark. You can improve without being in poor form in front of others (or even in private, really). And it’s pretty rare to literally NEVER lose.
What do you think of the following?
‘If the data is good, but the argument is not, argue the argument (e.g. by showing that it doesn’t hold water). Don’t argue about the conclusion and point to the bad argument as evidence.’ (not a rationality quote, just curious about your reaction)
Example?
The trope requires two things: 1) The woman winds up in the refrigerator (check). 2) It happens because someone is explicitly trying to get at somebody else, thus disempowering the victim, or it serves as an empty source of motivation for a character (???).
As readers, we don’t necessarily have confidence in criterion #2, here. Other commentators have come up with various plausible-sounding explanations for how the deed went down (a sunlight-resistant troll lures Hero-Hermione to a place where her injuries can’t be detected, and systematically removes all of her defenses). So, let’s posit that the troll is a weapon explicitly sent to kill Hermione.
The important question is who and why? If it’s to get a rise out of Harry, then it falls under the common heading of the trope. If it’s Mr Malfoy trying to get revenge on the destroyer of his son’s reputation, it doesn’t. I think as readers it’s difficult at this stage to be confident about these things.
Let’s say, however, that Hermione is the woman in the refrigerator. The road to get there had her basically kicking ass and taking names the whole way; I’m not necessarily inclined to cry foul because of it.