just that the parenting styles in the data set did not affect it.
Nitpick: Probably did not affect it differently.
What you claimed was that “It is perfectly acceptable to make a reply to a publicly made comment that was itself freely volunteered”, and that if someone didn’t want to discuss something then they shouldn’t have brought it up. In context, however, this was a reply to me saying it was probably unkind to belabor a subject with someone who had said they find it upsetting, which you now seem to agree with. So what are you taking issue with? I certainly didn’t mean to imply that if one person finds a subject uncomfortable to discuss then others should stop discussing it entirely, but that point isn’t raised in your great-grandparent comment, and I hope my meaning was clear from the context.
ETA: I have not voted on your comments here.
I disagree that it is in general unacceptable to post information that you would not like to discuss beyond a certain point.
Without further clarification one could reasonably assume that cousin_it was okay with discussing the subject at one remove, as you suggest, but as it happens, several days before the great-grandparent comment, cousin_it explicitly stated that it would be upsetting to discuss this topic.
Missed the point. Do you understand that you shouldn’t have been confident you knew why cousin_it felt a particular way? Beyond that, personally I’m not all that interested in theorizing about the reasons, but if you really want to know you could just ask.
That you may have discovered the reason you felt this way does not mean that you have discovered the reason another specific person felt a similar way. In fact, they may not even be aware of the causes of their feelings.
So a math professor is going through the proof of a theorem on the blackboard in front of his class. Partway through, a student stops him to ask about the justification for a particular step. The professor furrows his brow, stares at the chalkboard for a moment, then walks briskly from the room. Twenty minutes later he returns, his chalk worn down to a nub, and announces triumphantly, “it’s obvious”.
Outside of mystic circles, it is fairly uncontroversial that it is in principle possible to construct out of matter an object capable of general intelligence. Proof is left to the reader.
Yes, I am familiar with limits. What I mean is—if you say “f(x) goes to zero as x goes to zero”, then you are implying (in a non-mathematical sense) that we are evaluating f(x) in a region about zero—that is, we are interested in the behavior of f(x) close to x=0.
Edit: More to the point, if I say “g(f(x)) goes to zero as f(x) goes to infinity”, then f(x) better not be (known to be) bounded above.
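To make that concrete with a toy example (my own, not anything from the earlier exchange): if f is known to be bounded above, the limit statement is vacuous.

% Toy example; this particular f is purely illustrative.
\[
  f(x) = 1 - e^{-x} \le 1 \ \text{for all } x
  \quad\Longrightarrow\quad
  \text{``} g(f(x)) \to 0 \text{ as } f(x) \to \infty \text{''}
  \ \text{is vacuously true for every } g.
\]

Since f(x) never comes anywhere near infinity, the statement places no constraint on g at all.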
“Rational” is so frequently used as a contentless word that, if I were to have a comment keyword blacklist, it’d be number two on there, right after “status”, perhaps followed by “downvote me”. Unless you’re talking meta (as in the parent comment), I strongly recommend trying to figure out what you actually mean, and use that word. “Rationality” ain’t the goal.
I don’t get why it makes sense to say
the algorithm did make use of prior knowledge about the envelope distribution. (As the density of the differential of the monotonic function, in the vicinity of the actual envelope contents, goes to zero, the expected benefit of the algorithm over random chance, goes to zero.)
without meaning that the expected density of the differential does go to zero, or perhaps would go to zero barring some particular prior knowledge about the envelope distribution. And that doesn’t sound like “modifying the setup” to me; it seems like it would make the statement irrelevant. What exactly is the “modification”, and what did you decide his statement really means, if you don’t mind?
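For reference, here is how I read the quoted claim; this is a sketch of the usual randomized-switching analysis in my own notation, not necessarily Eliezer’s exact setup. Suppose the envelopes contain a and 2a, you open one uniformly at random, observe its contents x, and switch with probability f(x) for some strictly decreasing f.

% Sketch only; the decreasing switching probability f is my assumption,
% not something specified in the quoted statement.
\[
  P(\text{end with } 2a)
    = \tfrac12\, f(a) + \tfrac12 \bigl(1 - f(2a)\bigr)
    = \tfrac12 + \frac{f(a) - f(2a)}{2}.
\]

The edge over random chance is (f(a) - f(2a))/2, i.e. how much f drops between a and 2a, which is governed by the size of f’s derivative in the vicinity of the actual contents. Where f is nearly flat, that edge is nearly zero, though it stays strictly positive for any fixed a; I take that to be what the parenthetical is saying.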
I don’t know either, but I observe that an upside down W resembles
… an M...?
How did Eliezer determine that the expected benefit of the algorithm over random chance is zero?
So I guess we’re back to square one, then.
Yes, that all looks sensible. The point I’m trying to get at, the one I think Eliezer was gesturing towards, is that for any f and any epsilon, f(x) - f(2x) < epsilon for almost all x, in the formal sense. The next step is less straightforward: does it then follow that, prior to the selection of x, our probability of getting the right answer is 50%? This seems to be Eliezer’s implication. However, it also seems to rest on an improper uniform distribution over an infinite range, which I understand can be problematic. Or have I misunderstood?
I wouldn’t think it’d be all that difficult in absolute terms in this case, but even if it is, then read a bit, ask questions, engage in discussions, and hold off on making speeches until you can reasonably infer that you know what’s going on.
That’s definitely cheating! We don’t have access to the means by which X is generated. In the absence of a stated distribution, can we still do better than 50%?
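For what it’s worth, here is a quick simulation suggesting the answer is yes for any fixed pair of amounts, even with no distribution over them. Everything in it is my own illustration: the switching function exp(-x) and the sample values of a are arbitrary choices, and the edge it demonstrates shrinks toward nothing as the amounts grow.

import math
import random

def play_once(a, switch_prob=lambda x: math.exp(-x)):
    """One round of the two-envelope game with amounts a and 2a.

    Open a uniformly chosen envelope, then switch with probability
    switch_prob(observed); return the amount we end up holding.
    """
    envelopes = [a, 2 * a]
    random.shuffle(envelopes)
    observed, other = envelopes
    if random.random() < switch_prob(observed):
        return other
    return observed

def win_rate(a, trials=200_000):
    """Fraction of rounds in which we end up with the larger amount, 2a."""
    wins = sum(play_once(a) == 2 * a for _ in range(trials))
    return wins / trials

if __name__ == "__main__":
    # No distribution over a is assumed: the strategy beats 50% for every fixed a,
    # but the edge, (exp(-a) - exp(-2a)) / 2, shrinks toward zero as a grows.
    for a in (0.5, 1.0, 3.0, 8.0):
        print(f"a = {a:>4}: win rate ~ {win_rate(a):.3f}")

The point of a strictly decreasing switching probability is that it needs no knowledge of how the amounts were generated; the guarantee holds per pair of amounts, not in expectation over a prior.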
Same way anyone else does? How do you recognize when you know what “ornithopter” means?
As for the deterministic variant: since you’d need some distribution from which the value of X is selected, I’m not sure how best to calculate the EV of any particular scheme (whereas the nondeterministic algorithm sidesteps this by allowing the EV to be calculated after X is selected). With a good prior, yeah, it’d be pretty simple, but without one it becomes GAI-complete, yeah?
1⁄4 of the smallest possible amount you could win doesn’t count as a large constant benefit in my view of things, but that’s a bit of a nitpick. In any case, what do you think about the rest of the post?
Why would this be bad? I mean, it’s a pretty big IF, but if tortureworld is actually better, then just imagine a perfect world without torture, and that’s a lower bound on how great tortureworld is.
I don’t buy it! And not only based on personal experience: there’s just too much variation in humanity, and we’re getting pretty good at breaking out of supposed evolutionary imperatives.
I think I’d prefer to live now rather than in pretty much any prior era.