I’m feeling demoralized by Ben and Scott’s comments (and Christian’s), which I interpret as being primarily framed as “in opposition to the OP and the worldview that generated it,” and which seem to me to be not at all in opposition to the OP, but rather to something like preexisting schemas that had the misfortune to be triggered by it.
Both Scott’s and Ben’s thoughts ring to me as almost entirely true, and also separately valuable, and I have far, far more agreement with them than disagreement, and they are the sort of thoughts I would usually love to sit down and wrestle with and try to collaborate on. I am strong upvoting them both.
But I feel caught in this unpleasant bind where I am telling myself that I first have to go back and separate out the three conversations—where I have to prove that they’re three separate conversations, rather than it being clear that I said “X” and Ben said “By the way, I have a lot of thoughts about W and Y, which are (obviously) quite close to X” and Scott said “And I have a lot of thoughts about X’ and X″.”
Like, from my perspective it seems that there are a bunch of valid concerns being raised that are not downstream of my assertions and my proposals, and I don’t want to have to defend against them, but feel like if I don’t, they will in fact go down as points against those assertions and proposals. People will take them as unanswered rebuttals, without noticing that approximately everything they’re specifically arguing against, I also agree is bad. Those bad things might very well be downstream of e.g. what would happen, pragmatically speaking, if you tried to adopt the policies suggested, but there’s a difference between “what I assert Policy X will degenerate to, given [a, b, c] about the human condition” and “Policy X.”
(Jim made this distinction, and I appreciated it, and strong upvoted that, too.)
And for some reason, I have a very hard time mustering any enthusiasm at all for both Ben and Scott’s proposed conversations while they seem to me to be masquerading as my conversation. Like, as long as they are registering as direct responses, when they seem to me to be riffs.
I think I would deeply enjoy engaging with them, if it were common knowledge that they are riffs. I reiterate that they seem, to me, to contain large amounts of useful insight.
I think that I would even deeply enjoy engaging with them right here. They’re certainly on topic in a not-even-particularly-broad-sense.
But I am extremely tired of what-feels-to-me like riffs being put on [my idea’s tab], and of the effort involved in separating out the threads. And I do not think it is a result of e.g. a personal failure to be clear in my own claims, such that if I wrote better or differently this would stop happening to me. I keep looking for a context where, if I say A and it makes people think of B and C, we can talk about A and B and C, and not immediately lose track of the distinctions between them.
EDIT: I should be more fair to Scott, who did indeed start his post out with a frame pretty close to the one I’m requesting. I think I would take that more meaningfully if I were less tired to start with. But also it being “a response to Scott’s model of Duncan’s beliefs about how epistemic communities work, and a couple of Duncan’s recent Facebook posts” just kind of bumps the question back one level; I feel fairly confident that the same sort of slippery rounding-off is going on there, too (since, again, I almost entirely agree with his commentary, and yet still wrote this very essay). Our disagreement is not where (I think) Ben and Scott think that it lies.
I don’t know what to do about any of that, so I wrote this comment here. Epistemic status: exhausted.
I believe that I could not pass your ITT. I believe I am projecting some views onto you, in order to engage with them in my head (and publicly, so you can engage if you want). I guess I have a Duncan-model that I am responding to here, but I am not treating that Duncan-model as particularly truth-tracking. It is close enough that it makes sense (to me) to call it a Duncan-model, but its primary purpose for me is not predicting Duncan, but rather being there to engage with on various topics.
I suspect that being a better model would help it serve this purpose, and would like to make it better, but I am not requesting that.
I notice that I used different words in my header: “Scott’s model of Duncan’s beliefs.” I think this reveals something, though it certainly isn’t clear what; “beliefs” are for true things, while “models” are toys for generating things.
I think that in my culture, having a not-that-truth-tracking Duncan-model that I want to engage my ideas with is a sign of respect. I think I don’t do that with that many people (more than 10, but less than 50, I think). I also do it with a bunch of concepts, like “Simic,” or “Logical Induction.” The best models according to me are not the ones that are the most accurate, as much as the ones that are most generally applicable. Rounding off the model makes it fit in more places.
However, I can imagine that maybe in your culture it is something like objectification, which causes you to not be taken seriously. Is this true?
If you are curious about what kind of things my Duncan-model says, I might be able to help you build a (Scott’s-Duncan-Model)-Model. In one short phrase, I think I often round you off as an avatar of “respect,” but even my bad model has more nuance than just the word “respect.”
I imagine that you are imagining my comment as a minor libel about you, by contributing to a shared narrative in which you are something that you are not. I am sad to the extent that it has that effect. I am not sure what to do about that. (I could send things like this in private messages; that might help.)
However, I want to point out that I am often not asking people to update from my claims. That is often an unfortunate side effect. I want to play with my Duncan-model. I want you to see what I build with it, and point out where it is not correctly tracking what Duncan would actually say (if that is something you want). I also want to do this in a social context. I want my model to be correct, so that I can learn more from it, but I want to relinquish any responsibility for it being correct. (I am up for being convinced that I should take on that responsibility, either as a general principle, or as a cooperative action towards you.)
Feel free to engage or not.
PS: The above is very much responding to my Duncan-model, rather than what you are actually saying. I reread your above comment, and my comment, and it seems like I am not responding to you at all. I still wanted to share the above text with you.
Anyway, my reaction to the actual post is:
“Yep, Overconfidence is Deceit. Deceit is bad.”
However, reading your post made me think about how maybe your right to not be deceived is trumped by my right to be incorrect.
And I mean the word “maybe” in the above sentence. I am saying the sentence not to express any disagreement, but to play with a conjecture that I am curious about.
For the record, I was planning a reply to Scott saying something like “This seems true, and seemed compatible with my interpretation of the OP, which I think went out of its way to be pretty well caveated.”
I didn’t end up writing that comment, in part because I did feel something like “something going on in Scott’s post feels relevant to The Other FB Discussion,” and wanted to acknowledge that, but that seemed to be heading down a conversational path that I expected to be exhausted by, and then I wasn’t sure what to do and bounced off.
Yep, I totally agree that it is a riff. I think that I would have put it in response to the poll about how important it is for karma to track truth, if not for the fact that I don’t like to post on Facebook.