I usually find that I get negative value out of “Said posts many comments drilling into an author to get a specific concern resolved”. usually, if I get value from a Said comment thread, it’s one where Said leaves quickly, either dissatisfied or satisfied; when Said makes many comments, it feels more like polluting the commons, because I have to spend compute figuring out whether the thread is worth reading (and I usually conclude it isn’t). if I were going to make one change to how Said comments, it would be to have him finish threads with “okay, well, I’m done then” almost all the time after only a few comments.
(if I get to make two changes, the second would be to delete the part of his principles that is totalizing, the part that asserts his principles are correct and should be applied to everyone until proven otherwise, and replace it with a relaxation of that belief into an ensemble of context-specific “is this principle applicable?” models, each with a prior probability of his choosing in 0.0001 < x < 0.9999, so that he can ever update away from the principles, rather than assuming anyone who isn’t following them is necessarily in error.)
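(an editorial sketch of why those strict bounds matter, spelling out the standard Bayesian point the parenthetical leans on rather than anything stated in the thread: under Bayes’ rule, a prior of exactly 1 on “this principle is applicable” can never move, whereas any prior strictly between 0 and 1 can.)

```latex
% Posterior on H = "this principle is applicable here", after seeing evidence E:
\[
  P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \lnot H)\,\bigl(1 - P(H)\bigr)}
\]
% If P(H) = 1, the second term in the denominator vanishes, so P(H | E) = 1 for any E:
% the belief can never update away. If 0.0001 < P(H) < 0.9999, disconfirming evidence
% (P(E | H) < P(E | not H)) pulls the posterior down in the usual way.
```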
What specific practical difference do you envision between the thing that you’re describing as what you want me to believe, and the thing that you think I currently believe? Like, what actual, concrete things do you imagine I would do differently, if your wish came true?
(EDIT: I ask this because I do not recognize, in your description, anything that seems like it accurately describes my beliefs. But maybe I’m misunderstanding you—hence the question.)
well, in this example, you are applying a pattern of “What specific practical difference do you envision”, so I take you to be putting high probability on that being a good question. I would prefer you simply guess, describe your best guess, and if it’s wrong, I can then describe the correction. you having an internal autocomplete for me would lower the ratio of wasted communication between us for straightforward Shannon reasons, and my intuitive model of human brains predicts you have it already. and so in the original claim, I was saying that you seem to have frameworks that prescribe behaviors like “what practical difference”, which are things like, at a guess, “if a suggestion isn’t specific enough to be sure I’ve interpreted it correctly, ask for clarification”. I do that sometimes, but you do it more. and there are many more things like this; the more general pattern is my point.
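(an editorial sketch of the “straightforward Shannon reasons” being gestured at here, my reading rather than a claim made in the thread: if the listener decodes with a model Q of what the speaker means, the expected description length is the cross-entropy, which exceeds the entropy of the speaker’s true distribution P by exactly the KL divergence; a better internal autocomplete shrinks that gap, i.e. fewer rounds of clarification are wasted.)

```latex
% Expected message cost when the source is really P but the listener models it as Q:
\[
  \mathbb{E}_{x \sim P}\bigl[-\log Q(x)\bigr] = H(P) + D_{\mathrm{KL}}(P \,\|\, Q) \ge H(P),
\]
% with equality iff Q = P. The overhead D_KL(P || Q) is the wasted communication;
% a better internal model Q of the other person strictly shrinks it.
```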
anyway gonna follow my own instructions and cut this off here. if you aren’t able to extract useful bits from it, such as by guessing how I’d have answered if we kept going, then oh well.
I would prefer you simply guess, describe your best guess, and if it’s wrong, I can then describe the correction. you having an internal autocomplete for me would lower the ratio of wasted communication between us for straightforward Shannon reasons, and my intuitive model of human brains predicts you have it already.
I see… well, maybe it will not surprise you to learn that, based on long and much-repeated experience, I consider that approach to be vastly inferior. In my experience, it is impossible for me to guess what anyone means, and also it is impossible for anyone else to guess what I mean. (Perhaps it is possible for other people to guess what other people mean, but what I have observed leads me to strongly doubt that, too.) Trying to do this impossible thing reliably leads to much more wasted communication. Asking is far, far superior.
In short, it is not that I haven’t considered doing things in the way that you suggest. I have considered it, and tried it, and had it tried on me, many times. My conclusion has been that it’s impossible to succeed and a very bad idea to try.