I don’t know what document that link originally pointed to, but this document contains one of Jaynes’s earliest (if not the earliest) descriptions of the idea.
Stephen R. Diamond, there are two distinct things in play here: (i) an assessment of the plausibility of certain statements conditional on some background knowledge; and (ii) the relative frequency of outcomes of trials in a counterfactual world in which the number of trials is very large. You’ve declared that probability can’t be (i) because it’s (ii); in fact, the Kolmogorov axioms apply to both. Justification for using the word “probability” to refer to things of type (i) can be found in the first two chapters of this book. I personally call things of type (i) “probabilities” and things of type (ii) “relative frequencies”; the key is to recognize that they need different names.
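A minimal sketch of that point (my illustrative example; the fair-coin setup and trial count are assumptions, not anything from the discussion): a type-(i) plausibility assignment and a type-(ii) relative-frequency table are different objects, but both satisfy the same Kolmogorov axioms.

```python
import random

# Type (ii): relative frequencies over many simulated trials of a fair coin.
random.seed(0)
n_trials = 100_000
counts = {"heads": 0, "tails": 0}
for _ in range(n_trials):
    counts["heads" if random.random() < 0.5 else "tails"] += 1
freqs = {k: v / n_trials for k, v in counts.items()}

# Type (i): degrees of plausibility assigned from background knowledge alone.
plaus = {"heads": 0.5, "tails": 0.5}

# Both assignments obey the Kolmogorov axioms.
for p in (freqs, plaus):
    assert all(v >= 0.0 for v in p.values())            # non-negativity
    assert abs(sum(p.values()) - 1.0) < 1e-9            # normalization
    p_union = p["heads"] + p["tails"]                   # additivity for
    assert abs(p_union - sum(p.values())) < 1e-9        # disjoint events
```

The two dictionaries are conceptually distinct (one summarizes imagined trials, the other summarizes a state of knowledge), yet the same axiomatic checks pass for both, which is all that licenses the shared mathematics.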
On your further critiques:
(1) Eliezer is a determinist; see the quantum physics sequence.
(2) True. A logical argument is only as reliable as its premises, and every method for learning from empirical information is only as reliable as its inductive bias. Unfortunately, every extant practical method of learning has an inductive bias, and the no free lunch theorems give reason to believe that this is a permanent state of affairs.
I’m not sure what you mean in your last sentence...
Vilhelm S., companies and people who lose money on CDOs have mortgages to pay and employ people who have mortgages to pay… Once the system gets coupled like that, one unlucky bet can start the cascade. I’m not saying this actually happened, but it’s a mechanism which could falsify the assertion “the lack of correlation doesn’t stop being real just because people believe in it”.
David, the inelegance is that the study asked adults in general to imagine parental grief rather than asking parents in particular. (Your correct observations about imagined versus actual grief were already set forth in the post.)
This post helps to ease much of what I have found frustrating in the task of understanding the implications of EP.
Huh. I guess I just don’t see Angel (the TV character, not the commenter) as the equivalent of the verthandi. (Also, naming the idea after the actor instead of the character led me somewhat astray.)
If you google “boreana” you should get an idea of where that term comes from, same as with “verthandi”.
Still need a little help. Top hits appear to be David Boreanaz, a plant in the Rue family, and a moth.
No. I asserted that...
This might be a good idea… At this point, the “hedonic impact” of this mechanic will almost disappear.
I don’t disagree with this. My scenario is premised on the reward being a surprise, so it implicitly assumes one-time use, or at least no overuse.
Well, that is even worse, because essentially you just took the choice away from the player.
I can’t help but feel that you didn’t really bother to think this response through. Taken literally, you’ve just asserted that a surprising reward with character synergy is worse than a surprising rigid reward that makes the player feel regret. You assert that this is so because choice was taken away from the player even though neither situation involves player choice.
I get that your design principle is to give the player choice and the ability to plan. So what is the right way to give “good news” to the player with the most hedonic impact?
The emphasis on Bayesian probability is because it is the simplest way to extend classical logic to propositions with varying degrees of plausibility. Just as all classical logic can be reduced to repeated applications of modus ponens, all manipulations of plausibility can be reduced to applications of Bayes’ Theorem (assuming you want results that will line up with classical logic as the plausibilities approach TRUE and FALSE).
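As an illustrative sketch of that reduction (my example; the specific numbers are assumptions): one application of Bayes’ Theorem for ordinary uncertain reasoning, plus a check that it recovers a classical inference as the plausibilities approach TRUE and FALSE. Note that P(B|¬A) = 0 encodes “B only if A”, so observing B then forces P(A|B) to 1, exactly the classical modus ponens on premises B and B ⇒ A.

```python
def bayes_posterior(prior_a, lik_b_given_a, lik_b_given_not_a):
    """P(A|B) via Bayes' Theorem: P(A|B) = P(B|A) P(A) / P(B)."""
    p_b = lik_b_given_a * prior_a + lik_b_given_not_a * (1.0 - prior_a)
    return lik_b_given_a * prior_a / p_b

# Ordinary uncertain reasoning: evidence B raises the plausibility of A
# when B is more likely under A than under not-A.
posterior = bayes_posterior(prior_a=0.3,
                            lik_b_given_a=0.9,
                            lik_b_given_not_a=0.2)
assert posterior > 0.3  # B supports A

# Limit check: as P(B|A) -> 1 and P(B|not A) -> 0, the premises approach
# "B" and "B implies A", and the posterior plausibility of A approaches TRUE.
near_certain = bayes_posterior(prior_a=0.3,
                               lik_b_given_a=1.0 - 1e-9,
                               lik_b_given_not_a=1e-9)
assert near_certain > 0.999999
```

When the likelihoods carry no information (both equal), the same formula returns the prior unchanged, so the classical and uninformative limits bracket all the intermediate degrees of plausibility.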
If some or all abilities are hidden at the beginning, that forces the player to choose based on incomplete knowledge, and more often than not, leads to regrets: “I wish I’d purchased that ability which turned out to work in nice synergy with the others, and not this one which turned out to be useless…”. Especially if there’s some finite pool of resources used to purchase these abilities.
And that is not fun, even if surprising.
This seems to miss the point—you’re talking about a surprise that isn’t a pleasant surprise. Suppose the game were designed so that after achieving a goal, you got an unexpected bonus ability with awesome synergy with the character, no matter how the character had been developed up to that point. As a game designer, ignoring the difficulty of realizing such a design, how would you say the Fun-theoretic potential of this scenario stacks up?
A rule of thumb in game design is to never force players to make uninformed choices, as that only leads to frustration. This beats any possible pleasant surprise that might be there.
This rule of thumb is overly broad as stated. It would rule out poker, “fog of war” in RTS games, etc.
“Utopia” originally meant “no-place”; I have a hard time forgetting that meaning when people talk about utopias.
The term “utopia” was a deliberate pun on “outopia” meaning “no place” and “eutopia” meaning “good place”. It seems doubtful that Thomas More actually intended to depict his personal ideal society, so one might say that Utopia is the original Weirdtopia.
I think we’re looking at premature search-halts here.
I plead no contest.
Economic Weirdtopia: FAIth determines that the love of money actually is the root of ~75% of evil, so it’s back to the barter system for us.
Sexual Weirdtopia: FAIth determines that the separatist feminists were right—CEV requires segregation by sex. Homosexual men and lesbians laugh and laugh. Research on immersive VR becomes a preoccupation among the heterosexual majority in both segregated camps.
Not very plausible, but… “That’s the thing about FAIth. If you don’t have it, you can’t understand it. And if you do, no explanation is necessary.”
I don’t yet see quantifiable arguments why from-scratch AI is easier [than human augmentation].
From-scratch AI could also be justified as yielding greater benefits even if it is as difficult as (or more difficult than) human augmentation.
Cyan, is that a standard hypothesis? I’m not sure how “practice” would account for a very gregarious child lacking an ordinary fear of strangers.
I don’t know if it’s a standard hypothesis—it’s just floating there in my brain as background knowledge sans citation. It’s possible that I read it in a popular science book on neuroplasticity. I’d agree that “practice” doesn’t plausibly account for the lack of ordinary fear; it’s intended as an explanation for the augmentations, not the deficits.
Nitpick for Doug S.: that’s actually two coupled evolutionary limits. Babies’ heads need to fit through women’s pelvises, which also have to be narrow enough for useful locomotion.
Deacon makes a case for some Williams Syndrome symptoms coming from a frontal cortex that is relatively too large for a human, with the result that prefrontal signals—including certain social emotions—dominate more than they should.
Having not read the book, I don’t know if Deacon deals with any alternative hypotheses, but one alternative I know of is the idea that WSers’ augmented verbal and social skills arise because those are the only cognitive skills they are able to practice. In short, WSers are (postulated to be) geniuses at social interaction because of practice, not because of brain signal imbalance. This is analogous to the augmented leg and foot dexterity of people lacking arms.
How could we test these alternatives? I seem to recall that research has been done on the temporary suppression of brain activity using EM fields (carefully, one would hope). If I haven’t misremembered, then the effects of the brain signal imbalance might be subject to experimental investigation.
TGGP, I think it’s supposed to. The General is quoted in the linked article.
Would it have been the moral thing to do to turn around and leave the Indians alone, instead of taking their land and using it to build an advancing civilization...?
If you invoke the unlimited power to create a quadrillion people, then why not a quintillion?
One of these things is much like the other...