Thanks Matthew. Per your suggestion I just added Searle’s opinion on Zombies. Let me know if you have any difficulties using the website (feel free to email me at ben@takeonit.com ).
I’d like to explain more about the motivation behind TakeOnIt. The ultimate goal is to be able to predict people’s opinions. It started with the ordinary observation that during a discussion with someone, you can rapidly form a picture of their world view. Specifically, the more opinions that a person divulges to you, the more accurately you can predict all the other opinions of that person. It then occurred to me—could a computer predict many opinions a person has based on a small subset of their opinions?
While we don’t like to be “put in a box”, the statistical reality is that many of our opinions are a predictable function of other opinions that we have. For example, if someone has the opinion that the Theory of Evolution is false, we can predict that they are far more likely to believe in God, and more likely to be in favor of banning abortion. If someone believes in homeopathy, they are far more likely to believe in a host of other alternative medicines, and even more generally, less likely to have opinions of a scientific nature.
With this in mind, let’s turn to a common problem: we want to form an opinion on a topic outside of our domain expertise. Consider how we form an opinion on Global Warming. We might attempt to familiarize ourselves with the facts and arguments, but it’s terribly time-inefficient, and is akin to becoming a doctor to fix one’s own medical conditions. So instead we outsource our opinion: we will believe what the experts tell us. But which ones? There are respectable experts on both sides of the debate. Now, there are many more climatologists who believe Global Warming is caused by humans, but why trust the consensus? Let’s imagine that your opinions on a wide range of issues resonated very well with climatologists with the minority opinion, and conflicted badly with climatologists with the majority opinion. Who would you believe? Of course you’d side with the minority. We trust the opinions of others whose opinions overlap with our own. To the extent that we trust our own rationality, this is the rational thing to do.
With respect to Global Warming, we will believe in the experts who have opinions that most overlap with our opinions. Reciprocally, we would expect such experts to believe us, in a domain that they knew little about but where we were the experts.
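As a toy illustration of what trusting-by-overlap could look like computationally, here’s a minimal sketch. The agreement measure (fraction of shared questions answered the same way), the question ids, and the expert names are all my own invention for the example, not TakeOnIt’s actual algorithm:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class OpinionOverlap
{
    // Fraction of shared questions on which two people give the same yes/no answer.
    static double Agreement(Dictionary<int, bool> a, Dictionary<int, bool> b)
    {
        var shared = a.Keys.Intersect(b.Keys).ToList();
        if (shared.Count == 0) return 0.5; // no shared questions: no evidence either way
        return shared.Count(q => a[q] == b[q]) / (double)shared.Count;
    }

    static void Main()
    {
        // My answers to questions 1-3; I have no opinion on question 5 (the hard one).
        var me = new Dictionary<int, bool> { { 1, true }, { 2, false }, { 3, true } };

        var experts = new Dictionary<string, Dictionary<int, bool>>
        {
            { "Expert A", new Dictionary<int, bool> { { 1, true }, { 2, false }, { 5, true } } },
            { "Expert B", new Dictionary<int, bool> { { 1, false }, { 2, true }, { 5, false } } },
        };

        // Trust the expert whose known opinions overlap most with mine,
        // then defer to that expert on the question outside my expertise.
        var trusted = experts.OrderByDescending(e => Agreement(me, e.Value)).First();
        Console.WriteLine("Most trusted: " + trusted.Key + "; their answer on question 5: " + trusted.Value[5]);
    }
}
```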
Here’s a specific example in the Global Warming debate. Roy Spencer (see http://www.takeonit.com/expert/238.aspx on TakeOnIt), a leading skeptical climatologist:
1) does not believe humans cause global warming
2) does not believe in evolution
3) does believe in the cosmological argument

The fact that I disagree with him on ‘2’ and ‘3’, where I have a reasonable understanding of the issues, makes me less likely to trust his opinion on ‘1’, where I have a poorer understanding of the issue. This however is just one tiny example. The purpose of creating a database of opinions is ultimately to elevate this process from an anecdotal one to a statistical one. I want a system that can predict what I should believe, given what I already believe, before I even believe it!
I contacted Eliezer after reading his excellent post on the Correct Contrarian Cluster and realizing we were looking at a very similar problem.
Debate mapping is part of TakeOnIt, a publicly editable database of expert opinions introduced in a previous post ( http://lesswrong.com/lw/1kl/takeonit_database_of_expert_opinions/ ). It’s deliberately very simple. Here’s how it works:
1) Every debate is expressed as a yes-no question.
2) Every yes-no question has experts on both sides of the debate.
3) Every debate can link to a sub-debate (recursively).

A “simple debate” is one where ‘1’ and ‘2’ are sufficient. You can determine who is right in a simple debate by judging which experts have the best arguments, the best credentials, or the best track records. A good example is the vaccine debate, here:
http://www.takeonit.com/question/291.aspx
A “complex debate” is one which also requires ‘3’. This occurs when a simple debate is not sufficient to judge correctness, because the expert arguments, credentials, and track records, seem sufficiently reasonable on both sides of the debate. By recursively splitting a debate into sub-debates, a complex debate simply becomes a hierarchically structured set of simple debates. A good example is the Global Warming debate, here:
http://www.takeonit.com/question/5.aspx
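To make the structure concrete, here’s a minimal sketch in code of the recursive tree that rules 1–3 above describe. The class and field names are illustrative only, not TakeOnIt’s actual schema:

```csharp
using System.Collections.Generic;

// A debate is a yes-no question with experts on each side (rules 1 and 2),
// plus optional sub-debates, making the structure a recursive tree (rule 3).
class Debate
{
    public string Question;                                  // always phrased as a yes-no question
    public List<string> ExpertsForYes = new List<string>();  // experts answering "yes"
    public List<string> ExpertsForNo = new List<string>();   // experts answering "no"
    public List<Debate> SubDebates = new List<Debate>();     // empty for a "simple debate"

    // A complex debate is judged by recursively judging its simple sub-debates.
    public bool IsSimple { get { return SubDebates.Count == 0; } }
}
```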
The truth lies in the sub-debates. For example, in the Global Warming debate, there’s a sub-debate as to whether cosmic radiation significantly affects earth’s climate ( http://www.takeonit.com/question/74.aspx ). We have the top skeptic S. Fred Singer, head of the NIPCC (Nongovernmental International Panel on Climate Change), claiming that empirical evidence strongly supports his view. Yet we have a very respectable peer-reviewed paper contradicting him. In this particular case the skeptical side is significantly undermined. This process, of examining the likelihood of truth in simple sub-debates, is, in my opinion, the key to finding the truth in a complex debate.
I’ve been struggling a little with the visualization and editing UI for the debate/argument maps. I feel like I’ve managed to take a nice simple concept and then totally undermine it with a confusing UI. I think I’ve been looking at it too long. I greatly welcome any feedback.
P.S. Details on creating sub-debates: This works by linking two yes-no questions together via a “logical implication”. For two questions, A and B, you can express A → B. You can also use negation, to yield the combinations: A → B, A → ~B, ~A → B, ~A → ~B. Finally, you can use the modal logic qualifiers, “possibly” A → B vs. “necessarily” A → B. It’s explained in more detail in the Implications section of the FAQ, here: http://www.takeonit.com/help.aspx
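For the curious, here’s a rough sketch of how such an implication might be encoded. The type and field names are just my illustration, not TakeOnIt’s actual schema (see the FAQ for the real semantics):

```csharp
// "possibly" vs. "necessarily", applied to the implication as a whole.
enum Modality { Possibly, Necessarily }

// Links two yes-no questions A and B, with optional negation on either side.
// E.g. NegateA = true, NegateB = false, Qualifier = Modality.Possibly
// encodes "~A → possibly B".
class Implication
{
    public int QuestionA;       // id of the premise question
    public int QuestionB;       // id of the conclusion question
    public bool NegateA;        // premise is ~A rather than A
    public bool NegateB;        // conclusion is ~B rather than B
    public Modality Qualifier;  // modal qualifier on the implication
}
```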
Perhaps—let me know if I’m wrong—TakeOnIt argumentation is more fine-grained than it initially seems. To illustrate, I just added to the TakeOnIt database:
1) An implication between the question “Is cryonics worthwhile?” and “Could a computer ever be conscious?”.
2) Bryan Caplan’s opinion on whether a computer could be conscious.

This new data now shows up in the cryonics question: http://www.takeonit.com/question/318.aspx
The cryonics question is now more “fine-grained” / less “coarse-grained”. Of course, you can continue to add more sub-debates to the cryonics question, to make it even more fine-grained. Is this sufficiently fine-grained to be useful to you? I have a strong intuition (once again, perhaps I’m wrong) that a system for collaborative argument mapping has to be pretty darn simple in order to work. I resonate with Eliezer’s comment here:
“I suspect it will not have lots of fancy argument types and patterns, because no one really uses that stuff.”
Is this not true? If not, then what would you like to see added to TakeOnIt to make it more useful to you?
The functionality is already there… Bryan’s position on cryonics is at least partly based on his doubts regarding conscious computers. How do we represent this idea?
Add the following logical implication to the TakeOnIt database (done):
“~p → possibly ~q”, where p = “Could a computer ever be conscious?” and q = “Is cryonics worthwhile?”
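Using the illustrative types sketched earlier, that might be written as follows (pId and qId are hypothetical question ids standing in for the two questions):

```csharp
// Hypothetical question ids for the two questions involved.
int pId = 0; // "Could a computer ever be conscious?"
int qId = 0; // "Is cryonics worthwhile?"

// Encodes "~p → possibly ~q": if computers can't ever be conscious,
// then possibly cryonics isn't worthwhile.
var caplansDoubt = new Implication
{
    QuestionA = pId,
    QuestionB = qId,
    NegateA = true,                // premise is ~p
    NegateB = true,                // conclusion is ~q
    Qualifier = Modality.Possibly
};
```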
Point taken. I removed the implication to question “p” per your suggestion and added implications from question q (q = “Is cryonics worthwhile?”) to the questions:
a) “Is information-theoretic death the most real interpretation of death?”
b) “Is cryonic restoration technically feasible in the future?”
c) “Is there life after death?”

where the implications are:

a → possibly q
~b → necessarily q
c → necessarily ~q

(See the result here: http://www.takeonit.com/question/318.aspx )
Yes, it should have been ~b → necessarily ~q.
LOL. The idea that someone might actually expect an unpleasant life after death reminds me of some sort of twisted comic plot: a protagonist who’s confident he’s going to hell, so he tries to postpone eternal suffering with cryonics.
Seriously however, you’re right. Here’s another possible qualification: are we talking about a finite or infinite life after death? In light of these possibilities, I changed “c → necessarily ~q” to “c → possibly ~q”. I can’t change the wording of the question “Is there life after death?” because that question in its simple general form is already used in many other contexts on TakeOnIt. At one point I’d considered allowing implications to be annotated (e.g. to express qualifications, exceptions, etc.), but the complexity of the feature didn’t seem worth it.
Wasn’t that Paris Hilton? ;)
I’ve added most of your sources to the TakeOnIt wiki debate:
“Is cryonics worthwhile?”
http://www.takeonit.com/question/318.aspx

The cryonics debate now has four sub-debates:
Is information-theoretic death the most real interpretation of death?
Is cryonic restoration technically feasible in the future?
Is living forever or having a greatly extended lifespan desirable?
Is there life after death?
Am I missing any major sub-debate?
OK, per your suggestion I added the question: “Do the best currently available cryonic techniques cause information-theoretic death?”. I can’t actually find any expert who answers yes to this question. Any pointers?
P.S. I actually think the Caplan and Stark arguments reasonably reflect the mainstream objections to cryonics. However, if you know of better critics, please suggest some.
Nice link—I added Alcor’s side of the story to the question “Is cryonics worthwhile?”:
http://www.takeonit.com/question/318.aspx
Thanks to everyone’s suggestions, there are now 5 sub-debates for cryonics:
Is information-theoretic death the most real interpretation of death?
Is cryonic restoration technically feasible in the future?
Is living forever or having a greatly extended lifespan desirable?
Is there life after death?
Does cryonic preservation with today’s best technology cause information-theoretic death?
make this dataset formal!
Hence TakeOnIt, a database of expert opinions. Over the last few hours I’ve been entering all the expert opinions on cryonics that people have been posting links to:
Cryonics debate: http://www.takeonit.com/question/318.aspx
FYI—Robin Hanson’s opinions on TakeOnIt: http://www.takeonit.com/expert/656.aspx
My point is that the same infrastructure can be used to capture any debate, whether it’s the current cryonics debate or various debates from the past. The good thing about having a database of expert opinions is that it makes questions like the one you asked easier to answer.
I added the sub-debates suggested by Earendil, pdf23ds, and ciphergoth, giving us a total of 7 sub-debates for the cryonics debate:
Is information-theoretic death the most real interpretation of death?
Is cryonic restoration technically feasible in the future?
Is living forever or having a greatly extended lifespan desirable?
Does cryonic preservation with today’s best technology cause irreversible brain damage?
Is there life after death?
Is deterioration of the brain after death slow enough for cryonics to be worthwhile?
Assuming it was technically possible, would a cryonically suspended person actually get reanimated?
http://www.takeonit.com/question/318.aspx
Once again, let me know if there’s a major sub-debate missing. For the sake of cleanliness I’ll attempt to edit this post if someone suggests another sub-debate, rather than adding a new post.
While I loved this essay, I felt uncomfortable with the vagueness with which the group of “AGW Skeptics” was defined. If we define that group loosely to include every AGW skeptic, then there are obviously rationality-impoverished reasons AGW skeptics have for their beliefs, but the same is true for AGW believers. Attacking strawmen gets us nowhere.
A worthy attack on AGW skeptics should be directed at the leading skeptics who have expertise in climatology. They are making very specific scientific claims, such as:
Negative feedback loops in the atmosphere will mostly cushion atmospheric CO2 increases.
Fluctuations in cosmic radiation have been the main driver of warming in the 20th century.
These claims—while I think we have good scientific evidence against them—are not obviously unreasonable. What is unreasonable is the insinuation in the essay that skeptics who are professional climatologists deny the claim “we know from physics that [CO2 is] a greenhouse gas”. They don’t—the real issue that the professionals debate is whether the addition of greenhouse gases will cause a positive or negative feedback (without a positive feedback, the warming from increased CO2 levels is tolerable). The answer to that question requires much more subtle reasoning, and even with the aid of numerous state-of-the-art computer models, the variance in projections is still wide enough to warrant caution in our predictions. To liken an AGW skeptic to a creationist is unjustified, and I mean that in the deepest possible way, i.e. I’d be far more comfortable betting my money in a prediction market to support evolution than to support AGW.
Global Warming Debate:
http://www.takeonit.com/question/5.aspx
FWIW I started the “Thorium For Energy” advocacy group on Facebook a while ago. Join it if you can. Most people are simply unaware of this technology.
I also have the thorium energy question on TakeOnIt here: http://www.takeonit.com/question/127.aspx
Not many oppose it simply because not many know of it.
Woo!
The term is actually derived from the verb to “woo”.
The definition “A woo is a label for a commonly used argument or strategy to persuade” encompasses any commonly used and persuasive argument, including both valid and invalid arguments, or arguments that may or may not be valid depending on how they’re used (such as the Consensus Woo).
I think, however, that attaching an attribute to each Woo indicating its intrinsic validity would be a good idea. That kind of data could then be used to rate experts according to how often they use bad arguments, and hence contribute to the calculation of Eliezer’s Correct Contrarian Cluster.
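As a sketch of how that rating could work, assuming each tagged quote records its expert, its woo, and an intrinsic-validity flag (the experts, woos, and validity flags below are made up for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class WooRatings
{
    // One tagged quote: which expert said it, which woo it uses,
    // and whether that woo is intrinsically valid.
    class TaggedQuote
    {
        public string Expert;
        public string Woo;
        public bool Valid;
    }

    static void Main()
    {
        var quotes = new List<TaggedQuote>
        {
            new TaggedQuote { Expert = "Expert A", Woo = "Appeal to Authority", Valid = false },
            new TaggedQuote { Expert = "Expert A", Woo = "Consensus Woo", Valid = true },
            new TaggedQuote { Expert = "Expert B", Woo = "Consensus Woo", Valid = true },
        };

        // Rate each expert by how often their tagged quotes use a bad argument.
        foreach (var g in quotes.GroupBy(q => q.Expert))
        {
            double badRate = g.Count(q => !q.Valid) / (double)g.Count();
            Console.WriteLine(g.Key + ": bad-argument rate " + badRate.ToString("F2"));
        }
    }
}
```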
At one point I considered having any arbitrary tag for a quote. However, this was too open-ended. I thought it made sense to constrain the meaning of the tags to the tactics used to persuade. I then started thinking about categories of such tactics, and realized that instances of persuasive tactics didn’t neatly fall into categories. I found many tactics weren’t clearly classifiable as an argument or a rhetorical device, but somewhere in-between. Furthermore, I realized: what value is there in even deliberating over that choice? It seemed sufficient to simply have a term that captured the general case: a persuasive tactic. Now, I could have chosen the term “argument”, but then some people would complain that they’re not all arguments. That’s how the new term came about.
Here’s code to compute the probability empirically (I got an answer of 0.2051384 for 10000 draws. It’s written in C# but can be readily converted to functional languages such as Haskell).
Notice that isPairOfAces is not a function of isRandomAceASpade. In other words, the suit of the random ace doesn’t affect the probability of there being a pair of aces. Computer programs don’t suffer from Information Bias. OTOH I do, so let me know if I screwed up… ;)
For those interested in the complete working code:
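That code isn’t reproduced here, but a minimal sketch of this kind of simulation could look like the following. It assumes the four-card-deck version of the puzzle (Ace of Spades, Ace of Hearts, Two of Spades, Two of Hearts), under which both conditional probabilities converge to 1/5, consistent with the ~0.205 figure above; the deck and the program structure are my assumptions, not the original code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class InformationBiasSimulation
{
    enum Card { AceSpades, AceHearts, TwoSpades, TwoHearts }

    static readonly Random Rng = new Random();

    static bool IsAce(Card c) { return c == Card.AceSpades || c == Card.AceHearts; }

    static void Main()
    {
        int aceHands = 0, pairs = 0, spadeHands = 0, pairsGivenSpade = 0;

        for (int i = 0; i < 10000; i++)
        {
            // Deal a two-card hand from the four-card deck.
            var deck = new List<Card> { Card.AceSpades, Card.AceHearts, Card.TwoSpades, Card.TwoHearts };
            var hand = new List<Card>();
            for (int j = 0; j < 2; j++)
            {
                int k = Rng.Next(deck.Count);
                hand.Add(deck[k]);
                deck.RemoveAt(k);
            }

            var aces = hand.Where(IsAce).ToList();
            if (aces.Count == 0) continue; // condition on at least one ace

            bool isPairOfAces = aces.Count == 2;
            // Pick one of the hand's aces at random and note whether it's the spade.
            bool isRandomAceASpade = aces[Rng.Next(aces.Count)] == Card.AceSpades;

            aceHands++;
            if (isPairOfAces) pairs++;
            if (isRandomAceASpade)
            {
                spadeHands++;
                if (isPairOfAces) pairsGivenSpade++;
            }
        }

        // Both ratios converge to 1/5: learning the random ace's suit
        // doesn't change the probability of a pair of aces.
        Console.WriteLine("P(pair | at least one ace)      ~ " + (double)pairs / aceHands);
        Console.WriteLine("P(pair | random ace is a spade) ~ " + (double)pairsGivenSpade / spadeHands);
    }
}
```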