I’ll repeat here, then, the original question(s) which prompted that comment—how careful should one be to avoid generalizing from fictional evidence [described as a fallacy here, but I’d interpret it as a bias as well—which raises another potentially interesting question: how much overlap is there between fallacies and biases?]? When writing about artificial intelligence, for instance, would it be acceptable to mention Metamorphosis of Prime Intellect as a fictional example of an AI whose “morality programming” breaks down when conditions shift to ones its designer had not thought about? Or would it be better to avoid fictional examples entirely and stick purely to the facts?
Not only that, but that section should also include a monetary deposit that the author forfeits if his predictions turn out to be false. This would let readers see how much confidence the author himself has in his theories.
Of course, if one predicts something to happen a relatively long time from now, this might not work, because the deposit effectively feels lost (hyperbolic discounting). For instance, I wrote an essay speculating on true AI within 50 years: regardless of how confident I am of the essay’s premises and chains of logic, I wouldn’t deposit any major sums on it, simply because “I’ll get it back in 50 years” is far enough in the future to feel equivalent to “I’ll never get it back”. I have more use for that money now. (Not to mention that inflation would eat pretty heavily into the sum, unless interest of some sort were paid.)
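(To put rough numbers on this, here’s a minimal sketch; the 3% inflation rate and the hyperbolic discount parameter are invented for illustration, not figures from the essay:

```python
# Rough illustration of how inflation and hyperbolic discounting erode the
# subjective value of a 50-year deposit. All rates are assumed for illustration.
deposit = 1000.0          # nominal sum deposited today
inflation_rate = 0.03     # assumed average annual inflation
years = 50                # horizon of the prediction

# Purchasing power left after 50 years of inflation, with no interest paid.
real_value = deposit / (1 + inflation_rate) ** years
print(f"Real purchasing power after {years} years: {real_value:.2f}")  # ~228

# Hyperbolic discounting: perceived value = V / (1 + k * t),
# with k an assumed individual discount parameter.
k = 1.0
perceived = deposit / (1 + k * years)
print(f"Subjectively discounted value: {perceived:.2f}")  # ~19.6
```

Even before any psychological discounting, the deposit loses most of its purchasing power; with hyperbolic discounting on top, it feels close to worthless today.)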
Were we talking about predictions made on considerably shorter time scales, deposits would probably work better, but I still have a gut feeling that any deposits made on predictions with a time scale of several years would be much lower than the futurists’ actual degree of certainty would warrant. (Not to mention that the deposits would vary with each futurist’s personal income level, making accurate comparisons harder.)
Eliezer, good question. Now that I think of it, I realize that my AI article may have been a bit of a bad example to use here—after all, it’s not predicting AI within 50 years as such, but just making the case that the probability of it happening within 50 years is nontrivial. I’m not sure what the “get the deposit back” condition on such a prediction would be...
...but I digress. To answer your question: IBM was estimating that they’d finish building their full-scale simulation of the human brain in 10-15 years. Having a simulation where parts of a brain can be selectively turned on or off at will or fed arbitrary sense input would seem very useful in the study of intelligence. Other projections I’ve seen (but which I now realize I never cited in the actual article) place the development of molecular nanotech within 20 years or so. That’d seem to allow direct uploading of minds, which again would help considerably in the study of the underlying principles of intelligence. I tacked 30 years on that to be conservative—I don’t know how long it takes before people learn to really milk those simulations for everything they’re worth, but modern brain imaging techniques were developed about 15 years ago and are slowly starting to produce some pretty impressive results. 30 years seemed like an okay guess, assuming that the two were comparable and that the development of technology would continue to accelerate. (Then there’s nanotech giving enough computing power to run immense evolutionary simulations and other brute-force methods of achieving AI, but I don’t really know enough about that to estimate its impact.)
So basically the 50 years was “projections made by other people estimate really promising stuff within 20 years, then to be conservative I’ll tack on as much extra time as possible without losing the point of the article entirely”. ‘Within 50 years or so’ seemed to still put AI within the lifetimes of enough people (or their children) that it might convince them to give the issue some thought.
Voting and other things where the behavior of a single individual has relatively little effect. “A single vote does not matter either way, therefore I shall not waste my time voting” versus “I shall vote, for if everybody thought their votes did not matter we’d be screwed”. Or alternatively the same lines of thought when it comes to boycotting big corporations, or donating small sums to charities, et cetera. Which way of thinking is more rational?
it does not follow that Santa-ism is the best of all possible alternatives. Other policies could also supply children with a sense of wonder, such as taking them to watch a Space Shuttle launch or supplying them with science fiction novels.
This strikes me as slightly fallacious reasoning, since it’s implying that supplying children with science fiction novels and telling them about Santa Claus are mutually exclusive options. If one only wanted to inspire a sense of wonder in children, would the best option not be to tell them about Santa Claus and take them to watch a Space Shuttle launch and supply them with science fiction novels?
I understand your message, but I think Santa Claus is a bad example to illustrate it with. The “but there is a third choice as well” argument only applies if we’re talking about an either-or situation, but in this case, your suggested third choices can be just piled on top of the original one.
(If we wanted to attack the Santa Claus argument in particular, it could be pointed out that by the same logic, children should be presented with countless fairy stories as true ones, up to the point where they’d start getting seriously confused about how the world really works and how it doesn’t.)
Gah. And somehow I managed to miss that Jeremy had posted a comment with essentially the same content just before me. Ignore me, folks...
I know that if I’m at a party (of most types), for example, my first goal ain’t exactly to win philosophical arguments …
Funny, I’ve always thought that debates are one of the most entertaining forms of social interaction available. Parties with a lot of strangers around are one of the best environments for them—not only do you not know the others’ opinions in advance, which makes the discussions more interesting, but you’ll get to know them on a deeper level, and faster, than you could with idle small talk. You’ll get to know how they think.
We may not have rationality dojos, but in-person debating is as good an irrationality dojo as you’re going to get. In debating, you’re rewarded for ‘winning’, regardless of whether what you said was true
Only if you choose to approach it that way.
We can hit Explain for the Big Bang, and wait while science grinds through its process, and maybe someday it will return a perfectly good explanation. But then that will just bring up another dialog box.
I was reminded of “can the second law of thermodynamics be reversed?”, here.
I think the “would the world have been destroyed” comments are addressed pretty well by this bit from :
“Given the likely scale and effects of a nuclear attack, it’s most unlikely that everybody will be killed. There will be survivors and they will rebuild a society, but it will have nothing in common with what was there before. So, to all intents and purposes, once a society initiates a nuclear exchange it’s gone forever.”
Gah. Messed up my previous comment (next time, I’ll use preview). It should have read “this bit from Nuclear Warfare 101”.
Ev-psych seems to get advertised a lot around here, so it might be good to add David Buller’s Adapting Minds: Evolutionary Psychology and the Persistent Quest for Human Nature, a criticism of evolutionary psychology, to the reading list. (After that, of course, do also read Debunking Adapting Minds, the criticism of the criticism.)
What about statements that are so loaded to their listeners that they’re rejected outright, with seemingly no consideration? Are they subject to the same process (and have such outrageous implications that they’re rejected at once), or do they work differently?
There was an interesting exercise for overcoming writer’s block somewhere, which said to pick a word, any word at random. After you did that, you were told to write a sentence which included that word. After that, a paragraph which included that sentence.
It felt surprisingly effective.
So, since the topic came up, I’ll repeat the question I posed back in the “suggested posts” thread, but which (as far as I noticed) didn’t receive any reply:
How careful should one be to avoid generalizing from fictional evidence? When writing about artificial intelligence, for instance, would it be acceptable to mention Metamorphosis of Prime Intellect as a fictional example of an AI whose “morality programming” breaks down when conditions shift to ones its designer had not thought about (not in a “see, it’s happened before” sense, but in a “here’s one way it could happen” sense)? Or would it be better to avoid fictional examples entirely and stick purely to the facts?
Uhm, cosmic rays a threat to cryonics? Where the heck did /that/ come from?
Fascinating question. No matter how small the negative utility of the dust speck, multiplying it by a number such as 3^^^3 makes it far worse than the torture. Yet I find the obvious answer to be the dust specks, for reasons similar to what others have pointed out—the negative utility rounds down to zero.
But that doesn’t really solve the problem, for what if the harm in question were slightly larger? At what point does it cease rounding down? I have no meaningful criterion to give for that. Obviously there must be a point where it does cease doing so, for it is certainly much better to torture one person for 50 years than 3^^^3 people for 49 years.
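(To make the “rounding down” intuition concrete, here’s a minimal sketch; all the disutility numbers and the cutoff are invented for illustration, and 3^^^3 itself is far too large to compute with, which is rather the point:

```python
# Two aggregation rules for the specks-vs-torture comparison.
# All numbers are assumed for illustration only.

SPECK_DISUTILITY = 1e-9      # assumed harm of one dust speck
TORTURE_DISUTILITY = 1e7     # assumed harm of 50 years of torture
THRESHOLD = 1e-6             # harms below this "round down" to zero

def linear_total(harm_per_person: float, n_people: float) -> float:
    """Straightforward utilitarian aggregation: total harm scales with n."""
    return harm_per_person * n_people

def thresholded_total(harm_per_person: float, n_people: float) -> float:
    """Aggregation in which sub-threshold harms are treated as zero."""
    return 0.0 if harm_per_person < THRESHOLD else harm_per_person * n_people

# Under linear aggregation, specks overtake torture at a number of people
# that is tiny compared to 3^^^3.
crossover = TORTURE_DISUTILITY / SPECK_DISUTILITY
print(f"Linear aggregation: specks exceed torture once n > {crossover:.0e}")

# Under the thresholded rule the specks never add up, no matter how many
# people are involved; but the moment the per-person harm creeps above the
# threshold, the total becomes astronomical again.
print(thresholded_total(SPECK_DISUTILITY, 1e100))   # 0.0
print(thresholded_total(THRESHOLD * 2, 1e100))      # enormous
```

The sketch shows why the threshold move only relocates the problem: everything hinges on where the cutoff sits, and I have no principled way of placing it.)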
It is quite counterintuitive, but I suppose I should choose the torture option. My other alternatives would be to reject utilitarianism (but I have no better substitutes for it) or to modify my ethical system so that it solves this problem, but I currently cannot come up with an unproblematic way of doing so.
Still, I can’t quite bring myself to do so. I choose specks, and admit that my ethical system is not yet consistent. (Not that this is a surprise—I’ve noticed that all my attempts at building entirely consistent ethical systems tend to produce unwanted results at one point or another.)
For those who would pick SPECKS, would you pay a single penny to avoid the dust specks?

A single penny to avoid one dust speck, or to avoid 3^^^3 dust specks? No to the first one. To the second one, it depends on how often they occurred—if I somehow could live for 3^^^3 years, getting one dust speck in my eye per year, then no. If they actually inconvenienced me, then yes—a penny is just a penny.
If anything is aggregating nonlinearly it should be the 50 years of torture, to which one person has the opportunity to acclimate; there is no individual acclimatization to the dust specks because each dust speck occurs to a different person
I find this reasoning problematic, because in the dust-speck case there is effectively nothing to acclimate to… the amount of inconvenience to the individual will always be smaller in the speck scenario (excluding secondary effects, such as the individual being distracted and ending up in a car crash, of course).
Which exact person in the chain should first refuse?
Now, this is considerably better reasoning—however, there was no indication that this was a decision that would be faced over and over by countless people. Had it been worded “you, among many, have to make the following choice...”, I could agree with you. But the current wording implied that it was a once-per-universe sort of choice.
(Apologies in advance for the somewhat off-topic nature of this comment. As you’ll see shortly, I had little choice.)
I was wondering, is there an avenue for us non-contributor readers to raise questions we think would be interesting to discuss? As far as I know, there are no public Overcoming Bias forums or mailing lists where everybody can post. One could ask questions in the comment sections of this blog, but that would be hijacking the comments for subjects other than what was actually said in the post—and I believe I’ve already seen at least one admonishment for a commenter to stick to the topic. Is it best to just post a question in the comments anyway, and trust one of the regular contributors to make a real post about it if it’s deemed interesting enough?
(As for the specific question I had in mind—I was wondering how careful one should be to avoid generalizing from fictional evidence [described as a fallacy here, but I’d interpret it as a bias as well—which raises another potentially interesting question: how much overlap is there between fallacies and biases?]. When writing about artificial intelligence, for instance, would it be acceptable to mention Metamorphosis of Prime Intellect as a fictional example of an AI whose “morality programming” breaks down when conditions shift to ones its designer had not thought about? Or would it be better to avoid fictional examples entirely and stick purely to the facts?)