Any company that thinks it is hiring the top 1% is deluded, or maybe Microsoft. Any company that actually tries to hire the top 1% is heading for swift bankruptcy—you can’t afford the time and effort it takes to get “the best” of anything in business; you have to go with something that’s good and fast.
But people do give up after a while if they can’t get anywhere (especially in academia). Maybe you can help overcome the bias if there is some way of measuring how often people have applied in the past?
I believe that there was something about a similar approach in a paper, “Risk at a Turning Point?”, by Andrew Stirling. He argued that analysis of risk should group all the risks as a vector-valued quantity, rather than a scalar. That should be just as valid in this more general context: the risks, costs and opportunities of a particular scenario can then be represented as one big vector, and each interest group applies its own method to bring it down to a scalar value (or probability distribution) along the “support/oppose” continuum.
Andrew was focusing on the fact that, generally, the one doing the estimate was a government or a corporation that would apply its own method to get from the vector to the scalar, and only the scalar was announced. If the full vector were announced, however, it would be easier for groups with different values to come up with their own estimate of the scalar “support/oppose” distribution. They could also easily add extra elements to the vector (things like “the project is an eyesore”) and see how that changed their estimate, rather than raising it separately and having those fruitless “the project is an eyesore” vs “yes, but it’ll bring in cash” debates.
The vector could be what little ol’ dame rationality writes down in her notebook.
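The vector idea can be sketched in a few lines of code (the dimensions, weights and group names below are all invented for illustration, not from Stirling’s paper): each scenario is a shared vector of risks, costs and opportunities, and each interest group collapses it to a “support/oppose” scalar with its own weighting.

```python
# A scenario as a shared risk/cost/opportunity vector.
# Any group can add its own dimension (e.g. "eyesore").
reactor = {
    "waste_produced": 0.8,
    "cost": 0.6,
    "jobs_created": 0.7,
    "eyesore": 0.9,
}

# Each group's values, as weights on the shared vector
# (negative weight = counts against the project).
weights = {
    "environmental_group": {"waste_produced": -1.0, "cost": -0.2,
                            "jobs_created": 0.3, "eyesore": -0.5},
    "local_business": {"waste_produced": -0.1, "cost": -0.5,
                       "jobs_created": 1.0, "eyesore": 0.0},
}

def support_score(scenario, group_weights):
    """Collapse the full vector to a scalar on the support/oppose axis."""
    return sum(group_weights.get(dim, 0.0) * value
               for dim, value in scenario.items())

for group, w in weights.items():
    print(group, round(support_score(reactor, w), 2))
```

The point of announcing the full vector rather than the scalar is visible here: the same `reactor` data yields opposite conclusions under different weightings, and the disagreement is localised to the weights rather than to the facts.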
this may be the result of a useful heuristic
Another heuristic may be our habit of expecting some sources—say, newspapers—to present the arguments for and against an issue (“this will clean up the beach, but costs money”). If they say “this will produce more waste” and leave it at that, we may assume that’s the only way the reactor is different.
Hum… We can’t present Archimedes with the scientific method, but we could present him with simple experiments we did to confirm or disconfirm some of our simpler beliefs. That might be enough to give him a hint of the method...
Thanks Eliezer, for this example! Even though it’s been making my head spin—there seem to be ways of cheating the chronophone (such as presenting the scientific method through specific experiments), but they don’t seem “fair” to your project—deciding what is best for us today, if we don’t know the destination.
Since what we’d most want to do is to get Archimedes to doubt some of his assumptions, maybe what we’d need most today is doubt?
But the one thing I’d most like to tell him is “watch out for that Roman soldier!”
Finney, I do indeed think there’s a conflict between tsuyoku naritai and majoritarianism.
I don’t think that’s automatic. If you truly believe that the mean opinion is more reliable in general than any you could construct on your own, then moving towards that mean makes you better. And if you just take majoritarianism as a guide, rather than a dogma, there are even fewer problems.
The fact that if everyone did this, it would be a disaster may be an example of what I called moral freeloading—something that may be good for an individual to do, alone, but that would be very dangerous for everyone to imitate.
Ah, Finney made the same point I was making, and cunningly posted it first… ^_^
One small point: If we truly want to become stronger, then we should always test our abilities against reality—we should go out on a limb and make specific predictions and then see how they pan out, rather than retreating into “it’s complicated, so let’s just conclude that we’re not qualified to decide”. That’s an error I’ve often slipped into, in fact...
As for public proof
Should we want a public proof? Would that not attract lots of people who are more interested in signaling than in actually overcoming bias?
People should be aware of the advantages that de-biasing can bring, but we should let them know of it—quietly.
I’ve been thinking about this problem a bit. I think that every futurist paper should include a section where it lists, clearly, exactly what counts as a failure for this prediction. In fact, that would be the most important piece of the paper to read, and those with the most stringent (and short term) criteria for failure should be rewarded.
And, in every new paper, the author should list past failures, along with a brief sketch of why the errors of the past no longer apply here. This is for the authors themselves as much as for the readers—they need to improve and calibrate their predictions. Maybe we could insist that new papers on a certain subject are not allowed unless past errors in that subject are addressed?
Of course, to make this all honest and ensure that errors aren’t concealed or minimized, we should ensure that people are never punished for past errors, only for a failure to improve.
Now, if only we could extend such a system to journalists as well… :-)
Not only that, but that section should also include a monetary deposit that the author forfeits if his predictions turn out to be false.
That I strongly disagree with. We don’t want to discourage people from taking risks, we want them to improve with time. If there’s money involved, then people will be far shyer about the rigour of the “failure section”.
Ideally, we want people to take the most pride in saying “I was wrong before, now I’m better.”
Stuart, and Ilkka, how about you guys go first, with your next paper? It is easy to say what other people should do in their papers.
Alas, not much call for that in mathematics—the failure section would be two lines: “if I made a math mistake in this paper, my results are wrong. If not, then not.”
However, I am planning to write other papers where this would be relevant (next year, or even this one, hopefully). And I solemnly swear in the sight of Blog and in the presence of this blogregation, that when I do so, I will include a failure section.
And the people here are invited to brutally skewer or mock me if I don’t do so.
Fine print at the end of the contract: Joint papers with others are excluded if my co-writer really objects.
IBM was estimating that they’d finish building their full-scale simulation of the human brain in 10-15 years. Having a simulation where parts of a brain can be selectively turned on or off at will or fed arbitrary sense input would seem very useful in the study of intelligence. Other projections I’ve seen (but which I now realize I never cited in the actual article) place the development of molecular nanotech within 20 years or so.
Then you could make an interim prediction on the speed of these developments. If IBM are predicting a simulation of the human brain in 10-15 years, what would have to be true in 5 years if this is on track?
Same thing for nanotechnology—if those projections are right, what sort of capacities would we have in 10 years time?
But I completely agree with you about the unwisdom of using cash to back up these predictions. Since futurology speculations are more likely to be wrong than right (prediction is hard, especially about the future), improving people’s prediction skills is much more useful than punishing failure.
Actually, the failure section would be: “If my results are wrong, I made a math mistake in this paper. If I made no mistake in this paper, my results are correct.”
Indeed! :-) But I was taking “my results” to mean “the claim that I have proved the results of this paper.” Mea Culpa—very sloppy use of language.
that some properties by their very nature must take determinate values or not exist at all
This is not a scientific principle. Science lives or dies only on the accuracy of its predictions—probabilistic or deterministic. Don’t be confused by the fact that pre-quantum, pre-thermodynamics laws were deterministic—that was just a lucky fact, that persuaded people that all laws had to be the same.
As for the “some properties”, quantum mechanics asserts (and experiments back it up) that there is no such thing as position, or momentum—that the combination of the two is the actual property that exists.
As for the different interpretations of quantum mechanics—they’re all equivalent, or they differ in ways we can’t measure yet. So none of them on their own say anything about how we should view reality. Only the predictions of quantum mechanics tell us about reality, not the models.
Actually, I don’t think I agree with the thrust of this post. As long as you don’t argue “this is weird, hence it is wrong”, I think that if you find quantum mechanics strange you’re more likely to prosper in the field than if you force your sense of normality to match quantum reality.
In the first case, you can easily discover a new physical law, find it weird, and cheerfully accept it. In the second case, a new law may be an assault on your feeling of reality, so you may be less willing to accept it—and if you did, you’d have to go through the whole process of updating your instincts again.
People can develop very good intuition about things they find strange, without having to find them any less strange.
as if it made sense to say of a particle that it has a position, but no particular position
That might or might not make sense (mathematicians have been tearing their hair out about non-computable numbers; see Chaitin’s constant). But most quantum physicists do not say that a particle has a position. In fact, if you interpret quantum mechanics in terms of “hidden variables” (there are underlying values for the objects, like spin and momentum, but we can’t get at them), then you will generally come unstuck.
Can you explain to me the exact nature of this ‘combination’ that is the actual property?
The property is exactly the one in the quantum formalism. I don’t really see why you object to the formalism. It gives specific predictions that have been confirmed, with high probability, in experiments.
If you want an ontological view, then my position is that science is only about making predictions about the results of experiments and then testing them. Properties such as position, energy, etc… are only valid in that they predict a lot of different experiments. In classical mechanics, it emerged that a mathematical concept called “position” led to great predictive power, giving universal laws. So classically, “position” existed.
In quantum mechanics, laws based on “position” don’t work, so the concept of position doesn’t make sense in a quantum framework (just as “colour” makes no sense in acoustics). Other concepts did make sense—they had to be expressed in certain formal mathematical ways, but they made sense.
So, to sum up, position doesn’t exist, momentum doesn’t exist, but certain other objects (such as the product of the uncertainties of momentum and position) do make sense.
Aha! But have I not defined “uncertainty of position”? How can I claim this exists if position doesn’t?
The problem is just the name (and this goes back to Eliezer’s original point, and makes me think I may have been a bit hasty in rejecting it). This is just the standard deviation of an observable. It’s only called “uncertainty of position” because of an analogy with the classical “position”—a wrong analogy (and an observable, like a classical “position”, is just a mathematical object that seems to make sense in experiments).
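To spell that out: in the quantum formalism, the “uncertainty” is simply the standard deviation of the measurement outcomes of an observable in a given state, and Heisenberg’s relation bounds the product of two such standard deviations:

```latex
\Delta x = \sqrt{\langle \hat{x}^2 \rangle - \langle \hat{x} \rangle^2},
\qquad
\Delta p = \sqrt{\langle \hat{p}^2 \rangle - \langle \hat{p} \rangle^2},
\qquad
\Delta x \, \Delta p \ge \frac{\hbar}{2}.
```

Nothing in those definitions requires a particle to “have” a position; they are statistics of measurement outcomes, which is all the formalism ever promises.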
If a theory seems bizarre to your intuitions, then either the theory is wrong or your intuitions need reshaping.
I’m leaning towards embracing your point more, but still two issues:
1) “need”. If my intuition tells me something, but I know it’s wrong, and I can deal with that without letting my intuition interfere, why do I need to reshape my intuition—shouldn’t I just go with “don’t trust my intuition”?
2) As a mathematician, I have good mathematical intuition. It helped me when I took a course on quantum mechanics and relativity. However, the QM results offended my everyday intuition, and still do. Yet I could still derive QM results, based on my mathematical intuition and knowledge (and I’d get them right). If I considered the world of QM as a non-existent mathematical fiction, I could still work in it. So why do I need to make my everyday intuition match QM reality? What do I gain?
we shouldn’t get all indignant at reality for surprising us
A feeling I entirely agree with. Reality is out there, and finding a way of dealing with it is essential—whether through updated intuition, or conscious reasoning.
Never use quantum mechanics as an example of anything.
I’d heard that saying before, but never truly realised why until your post...
As you point out, the afterlife noble lie is dominated by other noble lies. And I think a lot of its attraction is precisely that it’s so unlikely. I’ve seen people who claim to believe in an afterlife, and who get very nervous when confronted with ideas like cryonics—because when they look at cryonics, they can see all the ways it can fail.
But their afterlife noble lie is so unlikely, so removed from the everyday, that people somehow feel it can’t fail. That it’s beyond issues of probability and likelihood.
Why does an omnipotent, omniscient god remain in people’s minds, while the bearded superman on Mount Olympus with an eye for the ladies is just a footnote of history? It seems that past a certain point, the bigger the Lie, the more likely it is to be believed.
How do people react if told “Here is a fixed amount of cash, that must go to charity. How do you wish it to be spent?”
Might that not distinguish “purchase of moral satisfaction” from “scope neglect”?
Those yellow LiveStrong bracelets are a great example. They’re about $1 or so, and purchasers wear them around all day advertising that they care about cancer. How many of those people would have donated an equivalent amount (just a buck) without the badge of caring they get to wear around?
Actually, in my experience it’s the other way round—people feel they’re doing their bit just by wearing the bracelets, so they’ll pay less for a bracelet than they’d donate anonymously.
But like most anecdotes, that one story doesn’t tell you anything—we need statistics if we want to truly know how people behave.