I can’t tell if you’re saying to support Bob’s bad idea and to (falsely) encourage him that it’s actually a good idea. I don’t agree, if so. If you’re just saying “continue supporting his previous good ideas, and evaluate his future ideas fairly, knowing that he’s previously had both good and bad ones”, then I agree. But I don’t think it’s particularly controversial or novel.
I’m not sure if more or different examples would help. I suspect my model of idea generation and support just doesn’t fit here—I don’t much care whether an idea is generated/supported by any given Bob. I care a bit about the overall level of support for the idea from many Bob-like people. And I care about the ideas themselves.
Also, almost every Bob has topics they’re great at and topics on which they’re … questionable. I strongly advise identifying which are which, so you don’t waste time on the questionable areas of otherwise-great thinkers.
I am not saying to falsely encourage him. I think I am mostly saying to continue giving him some attention/platform to get his ideas out in a way that would be heard. The real thing that I want is whatever will cause Bob to not end up back-propagating from the group epistemics into his individual idea generation.
I think I’m largely (albeit tentatively) with Dagon here: it’s not clear that we don’t _want_ our responses to his wrongness to back-propagate into his idea generation. Isn’t that part of how a person’s idea generation gets better?
One possible counterargument: a person’s idea-generation process actually consists of (at least) two parts, generation and filtering, and most of us would do better to have more fluent _generation_. But even if so, we want the _filtering_ to work well, and I don’t know how you enable evaluations to propagate back as far as the filtering stage but to stop before affecting the generation stage.
I’m not saying that the suggestion here is definitely wrong. It could well be that if we follow the path of least resistance, the result will be _too much_ idea-suppression. But you can’t just say “if there’s a substantial cost to saying very wrong things then that’s bad because it may make people less willing or even less able to come up with contrarian ideas in future” without acknowledging that there’s an upside too, in making people less inclined to come up with _bad_ ideas in future.
It is important that Bob was surprisingly right about something in the past; this means something was going on in his epistemology that wasn’t going on in the group epistemology, and the group’s attempt to update Bob may fail because it misses that important structure. Epistemic tenure is, in some sense, the group saying to Bob “we don’t really get what’s going on with you, and we like it, so keep it up, and we’ll be tolerant of wackiness that is the inevitable byproduct of keeping it up.”
That is, a typical person should care a lot about not believing bad things, and the typical ‘intellectual venture capitalist’ who backs a lot of crackpot horses should likely end up losing their claim on the group’s attention. But when the intellectual venture capitalist is right, it’s important to keep their strategy around, even if you think it’s luck or that you’ve incorporated all of the technique that went into their first prediction, because maybe you haven’t, and their value comes from their continued ability to be a maverick without losing all of their claim on group attention.
If Bob’s history is that over and over again he’s said things that seem obviously wrong but he’s always turned out to be right, I don’t think we need a notion of “epistemic tenure” to justify taking him seriously the next time he says something that seems obviously wrong: we’ve already established that when he says apparently-obviously-wrong things he’s usually right, so plain old induction will get us where we need to go. I think the OP is making a stronger claim. (And a different one: note that OP says explicitly that he isn’t saying we should take Bob seriously because he might be right, but that we should take Bob seriously so as not to discourage him from thinking original thoughts in future.)
And the OP doesn’t (at least as I read it) seem like it stipulates that Bob is strikingly better epistemically than his peers in that sort of way. It says:
> Let Bob be an individual that I have a lot intellectual respect for. For example, maybe Bob has a history of believing true things long before anyone else, or Bob has discovered or invented some ideas that I have found very useful.
which isn’t quite the same. One of the specific ways in which Bob might have earned that “lot of intellectual respect” is by believing true things long before everyone else, but that’s just one example. And someone can merit a lot of intellectual respect without being so much better than everyone else.
For an “intellectual venture capitalist” who generates a lot of wild ideas, mostly wrong but right more often than you’d expect, I do agree that we want to avoid stifling them. But we do also want to avoid letting them get entirely untethered from reality, and it’s not obvious to me what degree of epistemic tenure best makes that balance.
(Analogy: successful writers get more freedom to ignore the advice of their editors. Sometimes that’s a good thing, but not always.)
I think you’re more focused on Bob than I am, and have more confidence in your model of Bob’s idea generation/propagation mechanisms.
I WANT Bob to update toward more correct ideas in the future, and that includes feedback when he’s wrong. And I want to correctly adjust my prior estimate of Bob’s future correctness. Both of these involve recognizing that errors occurred, and reducing (not to zero, but not at the level of the previous always-correct Bob) the expectation of future goodness.
Just want to check that whoever downvoted Dagon’s comment sees the irony? :)
(Context: At time of writing the parent comment was at −1 karma)
Currently 1, with 4 votes. My other comment on the post is at 2 with 5 votes. This is below target for me, but not enough that I’m likely to change much. Note that I don’t care much about Karma totals, more about replies and further discussion. I have in the past announced that I intend my comments to express my true beliefs, but also to provoke further reaction/correction. One measure of this is to seek to comment in ways that attract some number of downvotes.
Also, there’s no irony if the downvoters do not believe I’ve earned any epistemic respect from previous comments, so they do not want to encourage my further commenting.
You’re right, of course; I just found it amusing that someone would disagree that it’s a good idea to provide negative feedback and then provide negative feedback.