(Incidentally, I think it very unlikely that incredible stupidity is the explanation. That is not, in this context, to your credit.)
Why be so pessimistic? Well, right now this minute the latest article on your blog suggests that if an eminent scientist declares that evolution is a fact then one should consider seriously that “quite possibly” he “may well” be saying it only because he’s a child-molesting predator.
Whether this indicates that you’re incredibly stupid, or that you simply don’t care whether or not what you say is true, or that you think there’s nothing wrong with slandering thousands of people in this way, or that you’re living in some separate reality from the rest of us, doesn’t really matter; it’s already sufficient to make my personal estimate of the probability of any worthwhile interaction well below 1%. (And also to make it very difficult for me to believe that you have any sincere intention of “picking up good ideas from each other”.)
I mostly concur, but I think you can (and commonly do) get some “negative” information before he stops. If CA comes out with a succession of bad arguments, then even before you know “these are all he has” you know “these are the ones he has chosen to present first”.
I know that you know this, because you made a very similar point recently about creationists.
(Of course someone might choose to present their worst arguments first and delay the decent ones until much later. But people usually don’t, which suffices.)
Yeah, but when playing actual Taboo “rational agents should WIN” (Yudkowsky, E.) and therefore favour “nine innings and three outs” over your definition (which would also cover some related-but-different games such as rounders, I think). I suspect something like “Babe Ruth” would in fact lead to a quicker win.
None of which is relevant to your actual point, which I think a very good one. I don’t think the tool is all that nonstandard; e.g., it’s closely related to the positivist/verificationist idea that a statement has meaning only if it can be paraphrased in terms of directly (ha!) observable stuff.
How do you know QED didn’t teach you a single bit of physics?
If you assimilated the corresponding bits of the Feynman lectures (or any other physics you encountered along the way) at all more easily for having read QED at age 9, then it did teach you some physics, albeit in a sense hard to quantify.
If reading its hand-waving stuff about light taking all possible paths at once increased the probability you’d have assigned to (say) something like the Aharonov-Bohm effect if anyone had thought to ask you how likely you thought it, then it did teach you some physics, even in the “Technical” sense. (Whether more or less than one bit depends on how much that probability increased.)
If having notions like path integrals, phase, and stationary action waved at you unintimidatingly didn’t push your thinking about physics in the direction of clearer understanding, then it seems that you were either (1) already implausibly acquainted with them for even an extraordinarily bright 9-year-old, or (2) implausibly impervious to such things for someone capable of reading and enjoying QED. Of course, something could be implausible to me but still true.
No, Jacob, you aren’t merely saying that and it’s transparently obvious that you aren’t merely saying that. To think that a good way of merely saying that is to do as you did—or even to think that any sane person would believe you when you claim it—would require that incredible stupidity I mentioned, and I don’t think you’re actually incredibly stupid.
Anyway, I shall now take Eliezer’s advice and stop attempting to discuss things with you here. If I’d noticed that an anonymous commenter here had already drawn attention to your odious comments about scientists and believers in evolution, I wouldn’t have bothered in the first place. My apologies to readers here for wasting bandwidth.
Has it been established that people who prefer “98 is approximately 100” to “100 is approximately 98” or “Mexico is like the US” to “the US is like Mexico” do so because, e.g., they think 98 is nearer to 100 than vice versa? It seems to me that “approximately 100” and “like the US” have an obvious advantage over “approximately 98” and “like Mexico”: 100 is a nice-round-number, one that people are immediately familiar with the rough size of and that’s easy to calculate with; the US is a nation everyone knows (or thinks they do).
I bet there really is a bias here, but that observation doesn’t strike me as very good evidence for it. The rival explanations are too good. (The example about disease in ducks and robins is much better.)
Bo, the point is that what’s most difficult in these cases isn’t the thing that the 10-year-old can do intuitively (namely, evaluating whether a belief is credible, in the absence of strong prejudices about it) but something quite different: noticing the warning signs of those strong prejudices and then getting rid of them or getting past them. 10-year-olds aren’t specially good at that. Most 10-year-olds who believe silly things turn into 11-year-olds who believe the same silly things.
Eliezer talks about allocating “some uninterrupted hours”, but for me a proper Crisis of Faith takes longer than that, by orders of magnitude. If I’ve got some idea deeply embedded in my psyche but am now seriously doubting it (or at least considering the possibility of seriously doubting it), then either it’s right after all (in which case I shouldn’t change my mind in a hurry) or I’ve demonstrated my ability to be very badly wrong about it despite thinking about it a lot. In either case, I need to be very thorough about rethinking it, both because that way I may be less likely to get it wrong and because that way I’m less likely to spend the rest of my life worrying that I missed something important.
Yes, of course, a perfect reasoner would be able to sit down and go through all the key points quickly and methodically, and wouldn’t take months to do it. (Unless there were a big pile of empirical evidence that needed gathering.) But if you find yourself needing a Crisis of Faith, then ipso facto you aren’t a perfect reasoner on the topic in question.
Wherefore, I at least don’t have the time to stage a Crisis of Faith about every deeply held belief that shows signs of meriting one.
I think there would be value in some OB posts about resource allocation: deciding which biases to attack first, how much effort to put into updating which beliefs, how to prioritize evidence-gathering versus theorizing, and so on and so forth. (We can’t Make An Extraordinary Effort every single time.) It’s a very important aspect of practical rationality.
I’d be happy to buy lots of lottery tickets that had a 1⁄132 chance of winning, given the typical payoff structure of lotteries of the kind you describe.
To act rationally, it isn’t enough to arrive at the correct (probabilities of) beliefs; to act on a belief, the degree of belief you need in it might not be very great.
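To make that concrete with a toy calculation (the specific ticket price and jackpot here are hypothetical, not from the thread): at a 1⁄132 chance of winning, even a modest jackpot can make buying the ticket positive-expected-value, which is why a far-from-certain belief can still be the right one to act on.

```python
def expected_profit(p_win, jackpot, ticket_price):
    """Expected profit from buying one ticket, in the same units as the stakes."""
    return p_win * jackpot - ticket_price

# Hypothetical numbers for illustration: a $1 ticket with a $1,000 jackpot.
ev = expected_profit(1 / 132, 1000, 1)  # roughly +$6.58 per ticket
```

Whether to act on a belief depends on the payoffs as well as the probability; here a 1⁄132 degree of belief is ample.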
Given the strong tendency to collapse all degrees of belief into a two-point scale (yea or nay), I suspect that our intuitions about how much one has to believe in something in order to act accordingly are often too stringent, since the actual strengths of our beliefs are so often much too large.
(Note: “often” doesn’t mean “always” or even “usually”.)
Only because you think of Japanese schoolgirls and tentacle monsters once a minute.
Ouch, don’t the units in that diagram hurt your brain? (Yeah, I understand what it means and it does make sense, but it looks soooo wrong. Especially in my part of the world where an ounce is a unit of mass or weight, not of volume.)
To put it differently, motivated stopping is a problem in pi tests just like it is in psi tests. :-)
Longer version of PdB’s answer: “because of the curvature of space” isn’t “just a curiosity stopper” if you can actually say what that means, do the mathematics, and see how that leads to the phenomenon of gravitation. Of course when you do this you encounter other more fundamental things that you haven’t explained yet. (See Eliezer’s “Explain/Worship/Ignore” piece here some time back.) This is only curiosity-stopping if you then say “no point ever trying to go any deeper than this”.
If the fact that there are not-yet-explained things underlying the curvature of space and how it produces gravity makes it improper to say “we do know how gravity works”, then I think similar facts make it improper to say “we know how a windmill works” or “we know how a seesaw works”, quod est absurdum.
Scientific explanations replace mysteries with smaller mysteries. You can call that “taking mystery out of the world” if you want to, but regarding that as a criticism is just preferring ignorance and stupidity over knowledge and understanding. If science took the wonder or the curiosity out of the world, that would be a criticism worth making, but oddly enough it’s a criticism only ever made by people who don’t know much science.
All of which seems to me to be merely repeating things Eliezer said—and that were common knowledge before Eliezer said them, too—so, bjk and/or Constant, maybe I’m misunderstanding you?
Incidentally: whether something “seems old-fashioned” has very little to do with whether it’s true.
PK, I thought Eliezer’s post made at least one point pretty well: If you disagree with some position held by otherwise credible people, try to understand it from their perspective by presenting it as favourably as you can. His worked example of capitalism might be helpful to people who are otherwise inclined to think that unrestrained capitalism is obviously bad and that those who advocate it do so only because they want to advance their own interests at the expense of others less fortunate.
I agree that he’s probably violating his own advice when he implies that capitalism amounts to treating “finance as … an ultimate end”.
My favourite example of motivated stopping is Lazzarini’s experimental “verification” of the Buffon needle formula.
(Drop toothpicks at random on a plane ruled with evenly spaced parallel lines. The average number of line-crossings per toothpick is related to pi. Lazzarini did the experiment and got pi to 6 decimal places. It seems clear that he did this by doing trials in batches whose size made it likely that he’d get an estimate equivalent to pi = 355⁄113, which happens to be very close, and then did one batch at a time until he happened to hit it on the nose.)
(Completely off-topic, here’s a beautiful derivation of the formula: Expectations are additive, so the expected number of line-crossings is proportional to the length of the toothpick and doesn’t depend on what shape it actually is. So consider a circular “toothpick” whose diameter equals the spacing between the lines. No matter how you drop this, you get 2 crossings; its length is pi times the line-spacing, so the constant of proportionality is 2/pi. Therefore the expected number of crossings for any toothpick of length L, in units where the line-spacing is 1, is 2L/pi. If L<1 then this is also the probability of getting a crossing at all, since you can’t get more than one.)
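A quick Monte Carlo sketch of the experiment, done honestly with a fixed sample size rather than Lazzarini’s stop-when-you-hit-355⁄113 batching (the function name and parameters are my own; lines are one unit apart and the needle has length L < 1, so P(crossing) = 2L/pi):

```python
import math
import random

def buffon_pi_estimate(n_drops, needle_len=0.5, seed=0):
    """Estimate pi by dropping needles of length L < 1 onto unit-spaced lines.

    Since P(crossing) = 2L/pi, we return pi ~ 2 * L * n_drops / crossings.
    """
    rng = random.Random(seed)
    crossings = 0
    for _ in range(n_drops):
        y = rng.random()                   # centre's offset above the nearest line below
        theta = rng.uniform(0.0, math.pi)  # needle's angle to the lines
        half_span = 0.5 * needle_len * math.sin(theta)  # vertical half-extent
        if y - half_span < 0.0 or y + half_span >= 1.0:
            crossings += 1
    return 2.0 * needle_len * n_drops / crossings
```

With a couple of hundred thousand honest drops you reliably get two or three correct digits of pi; six correct decimal places, as Lazzarini reported, is wildly beyond what this variance allows without motivated stopping.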
James, in regard to your last paragraph: I very much doubt whether your decision not to vote is itself a good one, by the standards you’ve just espoused. After all, if you don’t have enough information to decide between voting for X and voting for Y, how can you have enough information to decide between voting for X and voting for no one? Seems to me that you have to make a decision (which might end up being the decision to cast no vote, of course) and the fact that you don’t have enough evidence to be strongly convinced that your decision is best doesn’t relieve you of the responsibility for making it.
So, the randomized algorithm isn’t really better than the unrandomized one, because getting a bad result from the unrandomized one is only going to happen when your environment maliciously hands you a problem whose features match up just wrong with the non-random choices you make. All you need to do, then, is to make those choices in a way that’s tremendously unlikely to match up just wrong with anything the environment hands you, because it doesn’t have the same sorts of pattern in it that the environment might inflict on you.
Except that the definition of “random”, in practice, is something very like “generally lacking the sorts of patterns that the environment might inflict on you”. When people implement “randomized” algorithms, they don’t generally do it by introducing some quantum noise source into their system (unless there’s a real adversary, as in cryptography), they do it with a pseudorandom number generator, which precisely is a deterministic thing designed to produce output that lacks the kinds of patterns we find in the environment.
So it doesn’t seem to me that you’ve offered much argument here against “randomizing” algorithms as generally practised; that is, having them make choices in a way that we confidently expect not to match up pessimally with what the environment throws at us.
Or, less verbosely:
Indeed randomness can improve the worst-case scenario, if the worst-case environment is allowed to exploit “deterministic” moves but not “random” ones. What “random” means, in practice, is: the sort of thing that typical environments are not able to exploit. This is not cheating.
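As a concrete illustration (my own example, not from the post): quickselect with a fixed pivot rule, say “always take the first element”, can be forced into quadratic time by an adversarially ordered input, while a pivot chosen by a seeded PRNG is every bit as deterministic but lacks the patterns a typical environment could match up badly with.

```python
import random

def quickselect(xs, k, rng=None):
    """Return the k-th smallest element of xs (0-indexed).

    The pivot comes from a PRNG: deterministic once seeded, but with no
    pattern for an ordinary (non-adversarial) environment to exploit.
    """
    rng = rng or random.Random(0)
    xs = list(xs)
    while True:
        pivot = xs[rng.randrange(len(xs))]
        lo = [x for x in xs if x < pivot]
        eq = [x for x in xs if x == pivot]
        hi = [x for x in xs if x > pivot]
        if k < len(lo):
            xs = lo
        elif k < len(lo) + len(eq):
            return pivot
        else:
            k -= len(lo) + len(eq)
            xs = hi
```

Against a genuine adversary who can inspect the seed, this guarantee evaporates, which is why cryptography insists on real entropy; against a merely indifferent environment, the pseudorandom version delivers everything the “randomized” analysis promises.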
You can make positions relative in ways other than using pairwise distances as your coordinates. For instance, just take R^4n (or R^11n or whatever) and quotient by the appropriate group of isometries of R^4 or R^11 or whatever. That way you get a dimension linear in the number of particles. The space might be more complicated topologically, but if you take general relativity seriously then I think you have to be prepared to cope with that anyway.
So, in Eliezer’s example of triangles in 2-space, we start off with R^6; letting E be the group of isometries of R^2 (three-dimensional: two dimensions for translation, one for rotation; it also has two connected components, because we can either reflect or not), it acts on R^6 by applying each isometry uniformly to three pairs of dimensions; quotienting R^6 by this action of E, you’re left with a 3-dimensional quotient space, matching the three side lengths that determine a triangle up to congruence.
Of course you end up with the same result (up to isomorphism) this way as you would by considering pairwise distances and then noticing that you’re working in a small subset of the O(N^2)-dimensional space defined by distances. But you don’t have to go via the far-too-many-dimensional space to get there.
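For what it’s worth, here is the dimension count behind “linear in the number of particles”: for n points in R^d, the isometry group E(d) has d translational dimensions and d(d-1)/2 rotational ones, so

```latex
\dim\left(\mathbb{R}^{dn} / E(d)\right) \;=\; dn - d - \frac{d(d-1)}{2} \;=\; O(n),
```

whereas the pairwise-distance description uses $\binom{n}{2} = O(n^2)$ coordinates, almost all of them redundant once n is large.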
But … suppose the laws of physics are defined over a quotient space like this. From the anti-epiphenomenal viewpoint, I wonder whether we should consider the quantities in the original un-quotiented space to be “real” or not. Consider quantum-mechanical phase or magnetic vector potential, which aren’t observable (though other things best thought of as quotients of them are). Preferring to see the quotiented things as fundamental seems to me like the same sort of error as Eliezer (I think rightly) accuses single-world-ists of.
But … the space of distance-tuples (appropriately subsetted) and the space of position-tuples (appropriately quotiented) are the same space, as I mentioned earlier. So, how to choose? Simplicity, of course. And, so far as we can currently tell, the laws of physics are simpler when expressed in terms of positions than when expressed in terms of distances. So, for me and pending the discovery of some newer better way of expressing the state space that supports our churning quantum mist, sticking with absolute positions seems better for now.
The answer that’s obvious to me is that my mental moral machinery—both the bit that says “specks of dust in the eye can’t outweigh torture, no matter how many there are” and the bit that says “however small the badness of a thing, enough repetition of it can make it arbitrarily awful” or “maximize expected sum of utilities”—wasn’t designed for questions with numbers like 3^^^3 in. In view of which, I profoundly mistrust any answer I might happen to find “obvious” to the question itself.