Debate mapping is part of TakeOnIt, a publicly editable database of expert opinions introduced in a previous post ( http://lesswrong.com/lw/1kl/takeonit_database_of_expert_opinions/ ). It’s deliberately very simple. Here’s how it works:
1) Every debate is expressed as a yes-no question.
2) Every yes-no question has experts on both sides of the debate.
3) Every debate can link to a sub-debate (recursively).
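The three rules above amount to a small recursive data model. Here is a minimal sketch in Python (my own illustration, not TakeOnIt's actual schema; the class and field names are invented for clarity):

```python
from dataclasses import dataclass, field

@dataclass
class Opinion:
    expert: str   # e.g. "S. Fred Singer"
    agrees: bool  # which side of the yes-no question the expert takes

@dataclass
class Debate:
    question: str                                     # rule 1: a yes-no question
    opinions: list = field(default_factory=list)      # rule 2: experts on both sides
    sub_debates: list = field(default_factory=list)   # rule 3: recursive links

    def is_simple(self) -> bool:
        # "Simple" means rules 1 and 2 suffice: no sub-debates are needed.
        return not self.sub_debates

# A complex debate (question wording paraphrased, not TakeOnIt's exact text):
gw = Debate("Is global warming primarily caused by human activity?")
gw.sub_debates.append(
    Debate("Does cosmic radiation significantly affect earth's climate?"))
print(gw.is_simple())  # prints False: it decomposes into sub-debates
```

The recursion in `sub_debates` is what turns a complex debate into a hierarchy of simple ones, as described below.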
A “simple debate” is one where ‘1’ and ‘2’ are sufficient. You can determine who is right in a simple debate by judging which experts have the best arguments, the best credentials, or the best track records. A good example is the vaccine debate, here:
http://www.takeonit.com/question/291.aspx
A “complex debate” is one which also requires ‘3’. This occurs when a simple debate is not sufficient to judge correctness, because the expert arguments, credentials, and track records seem sufficiently reasonable on both sides of the debate. By recursively splitting a debate into sub-debates, a complex debate simply becomes a hierarchically structured set of simple debates. A good example is the Global Warming debate, here:
http://www.takeonit.com/question/5.aspx
The truth lies in the sub-debates. For example, in the Global Warming debate, there’s a sub-debate as to whether cosmic radiation significantly affects earth’s climate ( http://www.takeonit.com/question/74.aspx ). We have the top skeptic S. Fred Singer, head of the NIPCC (Nongovernmental International Panel on Climate Change), claiming that empirical evidence strongly supports his view. Yet we have a very respectable peer-reviewed paper contradicting him. In this particular case the skeptical side is significantly undermined. This process of examining the likelihood of truth in simple sub-debates is, in my opinion, the key to finding the truth in a complex debate.
I’ve been struggling a little with the visualization and editing UI for the debate/argument maps. I feel like I’ve managed to take a nice simple concept and then totally undermine it with a confusing UI. I think I’ve been looking at it too long. I greatly welcome any feedback.
P.S. Details on creating sub-debates: This works by linking two yes-no questions together via a “logical implication”. For two questions, A and B, you can express A → B. You can also use negation, to yield the combinations: A → B, A → ~B, ~A → B, ~A → ~B. Finally, you can use the modal logic qualifiers, “possibly” A → B vs. “necessarily” A → B. It’s explained in more detail in the Implications section of the FAQ, here: http://www.takeonit.com/help.aspx
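The scheme in the P.S. can be sketched in a few lines of Python (my own encoding, not TakeOnIt's internal representation; treating the unqualified form as “necessarily” is my assumption):

```python
from dataclasses import dataclass

@dataclass
class Implication:
    # Links two yes-no questions, A and B. Either side may be negated,
    # and the whole implication carries a modal qualifier.
    a_negated: bool = False
    b_negated: bool = False
    modality: str = "necessarily"  # or "possibly"

    def __str__(self) -> str:
        a = ("~" if self.a_negated else "") + "A"
        b = ("~" if self.b_negated else "") + "B"
        return f"{a} → {self.modality} {b}"

# The four negation combinations described above:
for a_neg in (False, True):
    for b_neg in (False, True):
        print(Implication(a_neg, b_neg))
# A → necessarily B
# A → necessarily ~B
# ~A → necessarily B
# ~A → necessarily ~B

# Each combination also exists with the "possibly" qualifier:
print(Implication(True, True, "possibly"))  # ~A → possibly ~B
```

So there are eight possible implication forms in total: four negation combinations, each with either modal qualifier.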
Thanks for chiming in.
TakeOnIt appears to encourage a “coarser-grained” approach to mapping a debate, compared to what I was trying to do with cryonics and how I ended up doing that in bCisive.
Its mode of operation doesn’t appear suitable for my purposes (improving discussions between people committed to truth-seeking, by exposing which parts of their belief system structures are congruent and which parts conflict; and ultimately, letting myself be convinced by arguments which are actually accurate, not just convincing).
Its raw material isn’t arguments per se, but entire worked-out positions. These worked-out positions are expressed in the usual blend of rhetoric and logic. Take for example this excerpt from the quoted position of Bryan Caplan on the “contra” side of cryonics: “If they had a ghost of a chance of giving me what I want, they wouldn’t need to twist the English language.”
There is an inference there, which a finer-grained tool would let us consider on its own, after rendering into its constituent parts: a) an observation (“cryonics advocates twist the English language”) which may or may not correspond to facts, b) an inference pattern (“people twist language to bolster untenable positions, therefore positions bolstered by twisted language tend to be untenable”) and c) a conclusion (“whatever cryonics advocates claim is an untenable position”).
The issue here is that this sentence is, of course, not Bryan’s entire reasoning on the matter; it’s only an excerpt from a blog post he wrote which wasn’t even intended as a potentially convincing argument, merely part of his telling a story about meeting Robin Hanson and the two of them discussing cryonics. Bryan’s actual point isn’t the above quoted (and rather low-quality) bit of argumentation, it is the assertion that “uploading doesn’t count as life extension”, and that doesn’t appear in the quote.
So, while TakeOnIt might be a valuable resource for researching a topic for the purposes of argument mapping, I would not plan to use it for the type of work I had in mind in the top post.
Perhaps—let me know if I’m wrong—TakeOnIt argumentation is more fine-grained than it initially seems. To illustrate, I just added to the TakeOnIt database:
1) An implication between the question “Is cryonics worthwhile?” and “Could a computer ever be conscious?”.
2) Bryan Caplan’s opinion on whether a computer could be conscious.
This new data now shows up in the cryonics question: http://www.takeonit.com/question/318.aspx
The cryonics question is now more “fine-grained” / less “coarse-grained”. Of course, you can continue to add more sub-debates to the cryonics question, to make it even more fine-grained. Is this sufficiently fine-grained to be useful to you? I have a strong intuition (once again, perhaps I’m wrong) that a system for collaborative argument mapping has to be pretty darn simple in order to work. I resonate with Eliezer’s comment here:
“I suspect it will not have lots of fancy argument types and patterns, because no one really uses that stuff.”
Is this not true? If not, then what would you like to see added to TakeOnIt to make it more useful to you?
I don’t quite see how it works. Bryan Caplan has some theory of identity and consciousness other than the information-state theory. He doesn’t express it very well; it is not decomposed; we cannot add evidence or propositions for or against specific pieces of it. It seems like that kind of functionality is what the OP is looking for.
The functionality is already there… Bryan’s position on cryonics is at least partly based on his doubts regarding conscious computers. How do we represent this idea?
Add the following logical implication to the TakeOnIt database (done):
“~p → possibly ~q”, where p = “Could a computer ever be conscious?” and q = “Is cryonics worthwhile?”
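For concreteness, that implication could be written out as plain data (a hypothetical representation; the field names are mine, not TakeOnIt’s schema):

```python
# Hypothetical encoding of the implication "~p → possibly ~q" added above.
implication = {
    "premise": "Could a computer ever be conscious?",
    "premise_negated": True,       # ~p: computers can never be conscious
    "modality": "possibly",        # "possibly" rather than "necessarily"
    "conclusion": "Is cryonics worthwhile?",
    "conclusion_negated": True,    # ~q: cryonics is not worthwhile
}

# Reading: if computers could never be conscious, then cryonics is
# possibly not worthwhile (since revival might depend on uploading).
```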
Er… this actually has almost no implications for cryonics. You’d just repair the old brain in situ.
It’s not important—my point was I just didn’t see how to break down the argument to focus on that flaw, but apparently you can.
But to explain: Bryan’s article was a response to a discussion he had with Robin. Apparently Robin focused on neuros and uploading in the discussion; I doubt that Bryan has a full understanding of all the options available for cryo and the possible revival technologies.
Point taken. I removed the implication to question “p” per your suggestion and added implications from question q (q = “Is cryonics worthwhile?”) to the questions:
a) “Is information-theoretic death the most real interpretation of death?”
b) “Is cryonic restoration technically feasible in the future?”
c) “Is there life after death?”
where the implications are:
a → possibly q
~b → necessarily q
c → necessarily ~q
( See the result here: http://www.takeonit.com/question/318.aspx )
Don’t you mean ~b → necessarily ~q?
Also, for c, you must specify, “Is there pleasant life after death?”
Yes, it should have been ~b → necessarily ~q.
LOL. The idea that someone might actually expect an unpleasant life after death reminds me of some sort of twisted comic plot: the protagonist who’s confident that they’re going to hell so tries to postpone eternal suffering with cryonics.
Seriously though, you’re right. Here’s another possible qualification: are we talking about a finite or infinite life after death? In light of these possibilities, I changed “c → necessarily ~q” to “c → possibly ~q”. I can’t change the wording of the question “Is there life after death?” because that question in its simple general form is already used in many other contexts on TakeOnIt. At one point I’d considered allowing annotating an implication (e.g. to express qualifications, exceptions, etc.), but the complexity of the feature didn’t seem worth it.
I’m not sure, but I think I heard at least one story about someone who actually did this.
Wasn’t that Paris Hilton? ;)
False alarm: she’s not signed up.