Dave Orr, the rite of passage is to give the correct answer, 2⁄11, in the face of pressure to conform.
Cyan2
Economic Weirdtopia: FAIth determines that the love of money actually is the root of ~75% of evil, so it’s back to the barter system for us.
Sexual Weirdtopia: FAIth determines that the separatist feminists were right—CEV requires segregation by sex. Homosexual men and lesbians laugh and laugh. Research on immersive VR becomes a preoccupation among the heterosexual majority in both segregated camps.
Not very plausible, but… “That’s the thing about FAIth. If you don’t have it, you can’t understand it. And if you do, no explanation is necessary.”
“They usually resort to the script of presuming a personal insult” instead of rightly apprehending the point you’re making, which is...?
This is the difficulty I have with your comments, Caledonian. You always leave the interesting part out. (This is not a personal insult, by the way—just a straightforward observation.)
Alexandre Passos, Unknown (and Caledonian too, from a previous thread),
Eliezer has already stated that he’s taking a deterministic many worlds interpretation of reality as a premise (and explained at some length why he does so in the QM series). If you disagree with that premise, of course the conclusions do not necessarily follow.
I’m not defending the assumption of determinism—but I am saying that a criticism of the argument that flows from Eliezer’s premises would be more apposite and interesting than essentially posting over and over again, “Nuh uh! What if the universe isn’t deterministic, huh?”
I’m sorry, I’m probably just cranky in the morning. I’ll go drink some coffee and then start regretting posting this.
IIRC, in some games the minimax strategy is stochastic. There are some games in which it is in fact best to fight randomness with randomness.
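A minimal sketch of the point, using Matching Pennies as the toy game (the payoffs and the grid-search approach here are just illustrative, not a general solver): any deterministic strategy can be exploited, so the maximin play is to randomize.

```python
# Matching Pennies: the row player wins (+1) if both coins match, loses (-1) otherwise.
# Any pure (deterministic) strategy is exploitable; the minimax strategy is a coin flip.

def row_payoff(p, q):
    """Expected payoff to the row player when row plays heads with prob p,
    column with prob q."""
    return p * q - p * (1 - q) - (1 - p) * q + (1 - p) * (1 - q)

def worst_case(p, grid=101):
    """Row player's guaranteed payoff if the column player best-responds."""
    return min(row_payoff(p, i / (grid - 1)) for i in range(grid))

# Search the row player's mixing probabilities for the one with the best guarantee.
best_p = max((i / 100 for i in range(101)), key=worst_case)
print(best_p)  # → 0.5: the optimal strategy is genuinely stochastic
```

Note that any pure strategy (p = 0 or p = 1) guarantees a payoff of -1 against a best-responding opponent, while mixing 50/50 guarantees 0.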
“But I still suspect that there’s a little distance there, that wouldn’t be there otherwise, and I wish my brain would stop doing that.”
A finely crafted recursion. I salute you.
“Affective death spiral” sounds like the process by which I became a militant evangelical Bayesian. But I got better: now I’m only a fundamentalist Bayesian, and my faith does not require me to witness the Bayesian Gospel to those who aren’t interested.
David, the inelegance is that the study asked adults in general to imagine parental grief rather than asking parents in particular. (Your correct observations about imagined versus actual grief were already set forth in the post.)
...none of them involve or have any use for “logic” or “reason” or Bayesian probability theory; none of these things are taught, used or applied by scientists...
Logic and reason are not taught, used, or applied by scientists—what!? I’m not sure what the scare-quotes around “logic” and “reason” are supposed to convey, but on its face, this statement is jaw-dropping.
As a working scientist, I can tell you I have fruitfully applied Bayesian probability theory, and that it has informed my entire approach to research. Don’t duplicate Eliezer’s approach and reduce science to a monolithic structure with sharply drawn boundaries.
I have a colleague who is not especially mathematically inclined. He likes to mess around in the data and try to get the most information possible out of it. Although it would surprise him to hear it, all of his scientific inferences can be understood as Bayesian reasoning. Bayesian probability theory is nothing more than an explicit formulation of one of the tasks that good working scientists are trained to do—specifically, learning from data.
Nitpick for Doug S.: that’s actually two coupled evolutionary limits. Babies’ heads need to fit through their mothers’ pelvises, which also have to be narrow enough for useful locomotion.
Deacon makes a case for some Williams Syndrome symptoms coming from a frontal cortex that is relatively too large for a human, with the result that prefrontal signals—including certain social emotions—dominate more than they should.
Having not read the book, I don’t know if Deacon deals with any alternative hypotheses, but one alternative I know of is the idea that WSers’ augmented verbal and social skills arise because those are the only cognitive skills they are able to practice. In short, WSers are (postulated to be) geniuses at social interaction because of practice, not because of brain signal imbalance. This is analogous to the augmented leg and foot dexterity of people lacking arms.
How could we test these alternatives? I seem to recall that research has been done in the temporary suppression of brain activity using EM fields (carefully, one would hope). If I haven’t misremembered, then effects of the brain signal imbalance might be subject to experimental investigation.
Tim Tyler, the thermodynamically problematic part of the Matrix is the fact that humans had induced something like nuclear winter to deny the machines the energy of the sun. Morpheus states that the machines then used humans as a source of energy. Humans get their energy from food: no sun implies no food implies no humans.
What’s the difference between calling Bayesian reasoning an “engine of accuracy” because of its information-theoretic properties as you’ve done in the past and saying that any argument based on it ought to be universally compelling?
Bayesian reasoning is an “engine of accuracy” in the same way that classical logic is an engine of accuracy. Both are conditional on accepting some initial state of information. In classical logic, conclusions follow from premises; in Bayesian reasoning, posterior probability assignments follow from prior probability assignments. An argument in classical logic need not be universally compelling: you can always deny the premises. Likewise, Bayesian reasoning doesn’t tell you which prior probabilities to adopt.
If some or all abilities are hidden at the beginning, that forces the player to choose based on incomplete knowledge, and more often than not, leads to regrets: “I wish I had purchased that ability which turned out to work in nice synergy with the others, and not this one which turned out to be useless...”. Especially if there’s some finite pool of resources used to purchase these abilities. And that is not fun, even if surprising.
This seems to miss the point—you’re talking about a surprise that isn’t a pleasant surprise. Suppose the game were designed so that after achieving a goal, you got an unexpected bonus ability with awesome synergy with the character, no matter how the character had been developed up to that point. As a game designer, ignoring the difficulty of realizing such a design, how would you say the Fun-theoretic potential of this scenario stacks up?
A rule of thumb in game design is to never force players to make uninformed choices, as that only leads to frustration. This beats any possible pleasant surprise that might be there.
This rule of thumb is overly broad as stated. It would rule out poker, “fog of war” in RTS games, etc.
“Utopia” originally meant no-place; I have a hard time forgetting that meaning when people talk about utopias.
The term “utopia” was a deliberate pun on “outopia” meaning “no place” and “eutopia” meaning “good place”. It seems doubtful that Thomas More actually intended to depict his personal ideal society, so one might say that Utopia is the original Weirdtopia.
I think we’re looking at premature search-halts here.
I plead no contest.
Eliezer, I think you have dissolved one of the most persistent and venerable mysteries: “How is it that even the smartest people can make such stupid mistakes?”
Michael Shermer wrote about that in “Why People Believe Weird Things: Pseudoscience, Superstition, and Other Confusions of Our Time”. On the question of smart people believing weird things, he essentially describes the same process Eliezer experienced: once smart people decide to believe a weird thing for whatever reason, it’s much harder to convince them that their beliefs are flawed because they are that much better at poking holes in counterarguments.
Imagine an alien civilisation that has, say, fourteen colours. Calling two adjacent ones by the same name would be as ridiculous to them as someone here calling green and yellow the same thing.
I don’t think you need alien civilizations for this. Not all human languages have color words that map 1:1 to English color words. (I seem to recall that the word for “red” in Korean includes what English speakers would call “copper”. I could be mistaken.)
The problem is this: empirically it turns out that when people first look for what is wrong with something, they tend to distort it. If they first look for what is right, they get a better view of it, and so are better able to judge what is wrong.
That’s a very interesting finding. Can I get a source?
Constant, a reply in brief:
“unkind words literally kill people dead”
Incitements to violence by leading citizens may plausibly be inferred to cause death. This is not usually classed as a bullshit inference.
“the unkind words you quote were… cherry-picked”
You say cherry-picked, I say representative of government policies that were actually carried out. Tomato, tomahto.
“native americans were on their side entirely without sin… never gave whites any reason to think of them as enemies.”
As Nick Tarleton noted, I never made that claim.
By “damning”, I meant, “worthy of condemnation as harmful, illegal, or immoral.” (That’s pretty much straight from the dictionary.)
Let me just add, genocide is something humans do—everywhere, at all times in history. (Chimps too, less efficiently.) The natives were no better or worse than the settlers, only more poorly equipped.
If there’s one thing I hate about wiggins, it’s how they use their military genius to utterly destroy their enemies, be they small children or hive-minded bug-eyed monsters.