TakeOnIt appears to encourage a “coarser-grained” approach to mapping a debate, compared to what I was trying to do with cryonics and how I ended up doing that in bCisive.
Its mode of operation doesn’t appear suitable for my purposes (improving discussions between people committed to truth-seeking, by exposing which parts of their belief structures are congruent and which parts conflict; and ultimately, letting myself be convinced by arguments which are actually accurate, not just convincing).
Its raw material isn’t arguments per se, but entire worked-out positions. These worked-out positions are expressed in the usual blend of rhetoric and logic. Take for example this excerpt from the quoted position of Bryan Caplan on the “contra” side of cryonics: “If they had a ghost of a chance of giving me what I want, they wouldn’t need to twist the English language.”
There is an inference there, which a finer-grained tool would let us consider on its own, after rendering it into its constituent parts: a) an observation (“cryonics advocates twist the English language”) which may or may not correspond to facts, b) an inference pattern (“people twist language to bolster untenable positions, therefore positions bolstered by twisted language tend to be untenable”) and c) a conclusion (“whatever cryonics advocates claim is an untenable position”).
The issue here is that this sentence is of course not Bryan’s entire reasoning on the matter; it’s only an excerpt from a blog post he wrote which wasn’t even intended as a potentially convincing argument, merely part of his telling a story about meeting Robin Hanson and the two of them discussing cryonics. Bryan’s actual point isn’t the above quoted (and rather low-quality) bit of argumentation; it is the assertion that “uploading doesn’t count as life extension”, and that doesn’t appear in the quote.
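For concreteness, here is one way a finer-grained tool might store the (a)–(c) decomposition so that each part can be challenged on its own. This is a hypothetical sketch; the `Node` type and its field names are my own invention, not anything in TakeOnIt or bCisive:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str   # "observation" | "inference" | "conclusion"
    text: str
    challenges: list = field(default_factory=list)  # objections attach here

# The excerpt from Bryan Caplan's quote, rendered into its constituent parts:
argument = [
    Node("observation", "cryonics advocates twist the English language"),
    Node("inference", "positions bolstered by twisted language tend to be untenable"),
    Node("conclusion", "whatever cryonics advocates claim is an untenable position"),
]

# A reader can now object to one node without attacking the whole quote:
argument[0].challenges.append(
    "Does the 'twisting' claim actually match how advocates use language?")
```

The point of the structure is that the observation, the inference pattern, and the conclusion accumulate evidence and objections independently.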
So, while TakeOnIt might be a valuable resource for researching a topic for the purposes of argument mapping, I would not plan to use it for the type of work I had in mind in the top post.
Perhaps—let me know if I’m wrong—TakeOnIt argumentation is more fine-grained than it initially seems. To illustrate, I just added to the TakeOnIt database:
1) An implication between the question “Is cryonics worthwhile?” and “Could a computer ever be conscious?”.
2) Bryan Caplan’s opinion on whether a computer could be conscious.
This new data now shows up in the cryonics question: http://www.takeonit.com/question/318.aspx
The cryonics question is now more “fine-grained” / less “coarse-grained”. Of course, you can continue to add more sub-debates to the cryonics question, to make it even more fine-grained. Is this sufficiently fine-grained to be useful to you? I have a strong intuition (once again, perhaps I’m wrong) that a system for collaborative argument mapping has to be pretty darn simple in order to work. I resonate with Eliezer’s comment here:
“I suspect it will not have lots of fancy argument types and patterns, because no one really uses that stuff.”
Is this not true? If not, then what would you like to see added to TakeOnIt to make it more useful to you?
I don’t quite see how it works. Bryan Caplan has some theory of identity and consciousness other than the information-state theory. He doesn’t express it very well, it isn’t decomposed, and we cannot add evidence or propositions for or against specific pieces of it. That kind of functionality seems to be what the OP is looking for.
The functionality is already there… Bryan’s position on cryonics is at least partly based on his doubts regarding conscious computers. How do we represent this idea?
Add the following logical implication to the TakeOnIt database (done):
“~p → possibly ~q”, where p = “Could a computer ever be conscious?” and q = “Is cryonics worthwhile?”
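As a sketch of the data involved, that implication could be recorded as a single link between two existing questions. The field names below are illustrative, not TakeOnIt’s actual schema:

```python
from collections import namedtuple

# One modal implication between two existing questions.
Implication = namedtuple(
    "Implication",
    "premise premise_answer modality conclusion conclusion_answer")

link = Implication(
    premise="Could a computer ever be conscious?",
    premise_answer=False,          # ~p: the premise answered "no"
    modality="possibly",           # a weak link, not "necessarily"
    conclusion="Is cryonics worthwhile?",
    conclusion_answer=False,       # ~q: what the premise suggests about q
)
```

Because the modality is “possibly”, a “no” on the premise only counts against cryonics; it doesn’t settle the question.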
Er… this actually has almost no implications for cryonics. You’d just repair the old brain in situ.
It’s not important—my point was that I just didn’t see how to break down the argument to focus on that flaw, but apparently you can.
But to explain: Bryan’s article was a response to a discussion he had with Robin. Apparently Robin focused on neuropreservation and uploading in that discussion—I doubt Bryan has a full understanding of all the options available for cryonics and the possible revival technologies.
Point taken. I removed the implication to question “p” per your suggestion and added implications from question q (q=”Is cryonics worthwhile?”) to the questions:
a) “Is information-theoretic death the most real interpretation of death?”
b) “Is cryonic restoration technically feasible in the future?”
c) “Is there life after death?”
where the implications are:
a → possibly q
~b → necessarily q
c → necessarily ~q
(See the result here: http://www.takeonit.com/question/318.aspx)
Don’t you mean ~b → necessarily ~q?
Also, for c, you must specify, “Is there pleasant life after death?”
Yes, it should have been ~b → necessarily ~q.
LOL. The idea that someone might actually expect an unpleasant life after death reminds me of some sort of twisted comic plot: a protagonist who, confident that they’re going to hell, tries to postpone eternal suffering with cryonics.
Seriously though, you’re right. Here’s another possible qualification: are we talking about a finite or an infinite life after death? In light of these possibilities, I changed “c → necessarily ~q” to “c → possibly ~q”. I can’t change the wording of the question “Is there life after death?” because that question in its simple, general form is already used in many other contexts on TakeOnIt. At one point I’d considered allowing annotations on an implication (e.g. to express qualifications, exceptions, etc.), but the complexity of the feature didn’t seem worth it.
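Putting the corrected set together, a toy evaluator (my own sketch; TakeOnIt need not work this way internally) would read the three implications as:

```python
# The final implication set, after the two corrections above.
# Tuples: (sub-question, triggering answer, modality, implied answer to q)
implications = [
    ("Is information-theoretic death the most real interpretation of death?",
     True, "possibly", True),       # a -> possibly q
    ("Is cryonic restoration technically feasible in the future?",
     False, "necessarily", False),  # ~b -> necessarily ~q
    ("Is there life after death?",
     True, "possibly", False),      # c -> possibly ~q
]

def entailments(answers):
    """List what each triggered implication says about q
    ("Is cryonics worthwhile?"), given answers to the sub-questions."""
    return [(modality, q_value)
            for question, trigger, modality, q_value in implications
            if answers.get(question) == trigger]
```

For example, `entailments({"Is cryonic restoration technically feasible in the future?": False})` returns `[("necessarily", False)]`: if restoration is infeasible, q is settled regardless of how the other sub-questions come out.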
I’m not sure, but I think I heard at least one story about someone who actually did this.
Wasn’t that Paris Hilton? ;)
False alarm: she’s not signed up.