Your argument rests on trying to be clever, which Mark rejected as a means of gathering knowledge.
Do you have empirical evidence of cases where people did well by updating after reading fictional stories? Are there any studies suggesting that people who update on fictional stories do better?
This seems promising!
Studies, no. I can’t imagine studies existing today that would resolve this, which of course is a huge failure of imagination: that’s a really good thing to think about. For anything high-level enough, I expect to run into problems with “do better”, such as “do better at predicting the behavior of AGI” not being an accessible category. I would be very excited if there were nearby categories that we could get our hands on, though; I expect this is similar to the problem of developing a notion of a “rationality quotient” and proving its effectiveness.
I’m not sure what you’re referring to with Mark rejecting cleverness as a way of gathering knowledge, but I think we may be arguing about what the human equivalent of logical uncertainty looks like. What’s the difference in this case between “cleverness” and “thinking”? (Also, could you point me to the place you were talking about?)
I guess I usually think of cleverness, in its negative sense, as “thinking in too much detail while having a blind spot”. So could you say which part of my response you think is bad thinking, or what you mean by cleverness instead?
It’s detached from empirical observation. It rests on the assumption that one can gather knowledge by reasoning itself (i.e. being clever).
I see. I do think you can update based on thinking; that’s the human analogue of the logical uncertainty I was talking about. As an aspiring mathematician, this is what I think the practice of mathematics looks like, for instance.
I understand the objection that this process may fail in real life, or lead to worse outcomes, since our models aren’t purely formal and our reasoning is neither purely deductive nor optimally Bayesian. It looks like some others have made great comments on this article discussing that as well.
I’m just confused about which thinking you’re considering bad. I’m sure I’m not understanding, because it sounds to me like “the thinking which is thinking, and not direct empirical observation.” There has to be some level above direct empirical observation, or you’re just an observational rock. Whatever internal process you have that approximates Bayesian reasoning at any level is a combination of your unconscious processing and your conscious thinking.
I’m used to people picking apart arguments. I’m used to heuristics that say, “hey, you’ve gone too far with abstract thinking here, and here’s an empirical way to settle it”, or “here’s an argument for why your abstract thinking has gone too far, and you should wait for empirical evidence or do X to seek some out.” But I’m not used to “your mistake was abstract thinking at all; you can do nothing but empirically observe to gain a new state of understanding”, at least with regard to things like this. I feel like I’m caricaturing, but there’s a big blank when I try to figure out what else is being said.
There are two ways you can do reasoning:
1) You build a theory about Bayesian updating and how it should work.
2) You run studies of how humans reason and when they reason successfully, and identify when and how humans reason correctly.
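To make the first option concrete, here is a minimal sketch (my own illustration, not anything from the thread) of what “a theory about Bayesian updating” looks like in miniature: a single application of Bayes’ rule, updating a prior belief in a hypothesis given evidence with known likelihoods. The specific numbers are made up for the example.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule.

    prior               -- P(hypothesis) before seeing the evidence
    likelihood_if_true  -- P(evidence | hypothesis)
    likelihood_if_false -- P(evidence | not hypothesis)
    """
    numerator = prior * likelihood_if_true
    evidence_prob = numerator + (1 - prior) * likelihood_if_false
    return numerator / evidence_prob

# Illustrative numbers only: start at 0.3 belief in some hypothesis,
# then observe evidence that is three times as likely if it's true.
posterior = bayes_update(prior=0.3, likelihood_if_true=0.6, likelihood_if_false=0.2)
print(posterior)  # belief rises from 0.3 to roughly 0.56
```

The point of the contrast in the thread is that nothing in this calculation tells you whether humans actually reason this way, or when following it leads to better outcomes; that is what the second, empirical option would have to establish.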
If I argued that taking a specific drug helps you with illness X, the only argument you would likely accept is an empirical study. That’s independent of whether you can find a flaw in the causal reasoning behind why I think the drug should help with the illness, at least if you believe in evidence-based medicine. The reason is that, in the field of medicine, theory-based arguments have often turned out to be wrong.
It’s not as though we live in a time with no one doing decision science. Whether or not people are simply blinded by fiction or whether it helps reasoning is an empirical question.
Ok, I think I see the core of what you’re talking about, especially “Whether or not people are simply blinded by fiction or whether it helps reasoning is an empirical question.” This sounds like an outside-view versus inside-view distinction: I’ve been focused on “what should my inside view look like”, using outside-view tools to modify it when possible (such as knowledge of a bias from decision science). I think you, and maybe Mark, are trying to say “the inside view is useless or counterproductive here; only the outside view will be of any use”, so that in the absence of outside-view evidence, we should simply not attempt to reason further unless it’s a super-clear case, like the one Mark illustrates in his other comment.
My intuition is that this is incorrect, but it reminds me of the Hanson-Yudkowsky debates on the outside view vs. the weak inside view, and I don’t think I have a strong enough grasp to clarify my intuition sufficiently right now. I’m going to try to pay serious attention to this issue in the future, though, and would appreciate any references you think might clarify it.
It’s not only outside vs. inside view. It’s that knowing things is really hard. Humans are by nature overconfident. Life isn’t fair. The fact that empirical evidence is hard to get doesn’t make theoretical reasoning about the issue any more likely to be correct.
I’d rather trust a doctor with medical experience (who has an inside view) to translate empirical studies in a way that applies directly to me than someone who reasons simply from reading the study and has no medical experience.
I do sin from time to time and act overconfident. But that doesn’t mean it’s right. Skepticism is a virtue. I like von Foerster’s book “Truth Is the Invention of a Liar” (unfortunately that book is in German, and I haven’t read other writing by him). It doesn’t really give answers, but it makes the unknowing more graspable.