I quoted it rather than paraphrasing because I didn’t want to put words in your mouth while you were in a potentially adversarial mood.
I explicitly acknowledge that I see myself as more of a teacher than a student ’round these parts.
If a person writes “I currently get A but what I really want is B”
...and then you selectively quote “I currently get A” as justification for summarizing them as being unlikely to want B...
...right after they’ve objected to you strawmanning and misrepresenting them left and right, and made it very clear to you that you are nowhere near passing their ITT...
...this is not “simplification.”
Apologizing for “over-simplifying,” under these circumstances, is a cop-out. The thing you are doing is not over-simplification. You are [not talking about simpler versions of me and my claim that abstract away some of the detail]. You are outright misrepresenting me, and in a way that’s reeeaaalll hard to believe is not adversarial, at this point.
It is at best falling so far short of cooperative discourse as to not even qualify as a member of the set, and at worst deliberate disingenuousness.
If a person wholly misses you once, that’s run-of-the-mill miscommunication.
If, after you point out all the ways they missed you, at length, they brush that off and continue confidently arguing with their cardboard cutout of you, that’s a bad sign.
If, after you again note that they’ve misrepresented you in a crucial fashion, they apologize for “over-simplifying,” they’ve demonstrated that there’s no point in trying to engage with them.
I find this unpromising, in light of the above.
I’m torn about getting into this one, since on one hand it doesn’t seem like you’re really enjoying this conversation or would be excited to continue it, and I don’t like the idea of starting conversations that feel like a drain before they even get started. In addition, other than liking my other comment on this post, you don’t really know me and therefore I don’t really have the respect/trust resources I’d normally lean on for difficult conversations like this (both in the “likely emotionally significant” and also “just large inferential distances with few words” senses).
On the other hand I think there’s something very important here, both on the object level and on a meta level about how this conversation is going so far. And if it does turn out to be a conversation you’re interested in having (either now, or in a month, or whenever), I do expect it to be actually quite productive.
If you’re interested, here’s where I’m starting:
Jennifer has explicitly stated that at this point her goal is to help you. This doesn’t seem to have happened. While it’s important to track possibilities like “Actually, it’s been more helpful than it looks”, it looks more like her attempt(s) so far have failed, and this implies that she’s missing something.
Do you have a model that gives any specific predictions about what it might be? And setting aside whether it would be worth the effort, or whether doing so would lead to bad consequences in other ways: do you have a model that gives specific predictions about what it would take to convey to her the thing(s) she’s missing, such that the conversation with her would go much more like you think it should?
Would you be interested in hearing the predictions my models give?
I don’t have a gearsy model, no. All I’ve got is the observations that:
Duncan’s post objects to a cluster of things X, Y, and Z
Jennifer’s response seems to me to state that X, Y, and Z are either not worth objecting to or possibly are actually good
Jennifer’s response exhibits X, Y, and Z in substantial quantity (which, to be fair, is consistent with principled disagreement, i.e. is not a sign of hypocrisy or lack-of-skill or whatever)
Duncan’s objections to X, Y, and Z within Jennifer’s pushback are basically falling on deaf ears, resulting in Jennifer adding more X, Y, and Z in subsequent responses
As is to be expected, given that the whole motivation for the OP was “LessWrong keeps indulging in and upvoting X, Y, and Z,” Jennifer’s being upvoted.
I’m interested in hearing both your model and your predictions. Perhaps a timescale of days-weeks is better than a timescale of hours-days.
There’s a lot here, and I’ve put in a lot of work writing and rewriting. After failing for long enough to put things in a way that is both succinct and clear, I’m going to abandon hopes of the latter and go all in on the former. I’m going to use the minimal handles for the concepts I refer to, in a way similar to using LW jargon like “steelman” without the accompanying essays, in hopes that the terms are descriptive enough on their own. If this ends up being too opaque, I can explicate as needed later.
Here’s an oversimplified model to play with:
Changing minds requires attention, and bigger changes require more attention.
Bidding for bigger attention requires bigger respect, or else no reason to follow.
Bidding for bigger respect requires bigger security, or else not safe enough to risk following.
Bidding for that sense of security requires proof of actual security, or else people react defensively, cooperation isn’t attended to, and good things don’t happen.
GWS took an approach of offering proof of security and making fairly modest bids for both security and respect. As a result, the message was accepted, but it was fairly restrained in what it attempted to communicate. For example, GWS explicitly says “I do not expect that I would give you the type of feedback that Jennifer has given you here (i.e. the ‘question the validity of your thesis’ variety).”
Jennifer, on the other hand, went full bore, commanding attention to places which demand lots of respect if they are to be followed, while offering little in return*. As a result, accepting this bid also requires a large degree of security, and she offered no proof that her attacks on Duncan’s ideas (it feels weird addressing you in the third person given that I am addressing this primarily to you, but it seems like it’s better looked at from an outside perspective?) would be limited to that which wouldn’t harm Duncan’s social standing here. This makes the whole bid very hard to accept, and so it was not accepted, and Duncan gave high-heat responses instead.
Bolder bids like that make for much quicker work when accepted, so there is good reason to be as bold as your credit allows. One complicating factor here is that the audience is mixed, and overbidding for Duncan himself doesn’t necessarily mean the message doesn’t get through to others, so there is a trade-off here between “Stay sufficiently non-threatening to maintain an open channel of cooperation with Duncan” and “Credibly convey the serious problems with Duncan’s thesis, as I see them, to all those willing to follow”.
Later, she talks about wanting to help Duncan specifically, and doesn’t seem to have done so. There are a few possible explanations for this.
1) When she said it, there might have been an implied “[I’m only going to put in a certain level of work to make things easy to hear, and beyond that I’m willing to fail]”. In this branch, the conversation between Duncan and Jennifer is going nowhere unless Duncan decides to accept at least the first bid of security. If Duncan responds without heat (and feeling heated but attempting to screen it off doesn’t count), the negotiation can pick up on the topic of whether Jennifer is worthy of that level of respect, or further up if that is granted too.
2) It’s possible that she lacks a good and salient picture of what it looks like to recover from over-bidding, and just doesn’t have a map to follow. In this branch, demonstrating what that might look like would likely result in her doing it and recovering things. In particular, this means pacing Duncan’s objections without (necessarily) agreeing with them until Duncan feels that she has passed his ITT and trusts her intent to cooperate and collaborate rather than to tear him down.
3) It could also be that she’s got her own little hang-up on the issue of “respect”, which caused a blind spot here. I put an asterisk there earlier, because she was only showing “little respect” in one sense, while showing a lot in another. If you say to someone “Lol, your ideas are dumb”, it’s not showing a lot of respect for those ideas of theirs. To the extent that they afford those same ideas a lot of respect, it sounds a lot like not respecting them, since you’re also shitting on their idea of how valuable those ideas are and therefore their judgement itself. However, if you say to someone “Lol, your ideas are dumb” because you expect them to be able to handle such overt criticism and either agree or prove you wrong, then it is only tentatively disrespectful of those ideas and exceptionally and unusually respectful of the person themselves.
She explicitly points at this when she says “Duncan is a special case. I’m not treating him like a student, I’m treating him like an equal”, and then hints at a blind spot when she says (emphasis her own) “who should be able to manage himself and his own emotions”—translating to my model, “manage himself and his emotions” means finding security and engaging with the rest of the bids on their own merits unobstructed by defensive heat. “Should” often points at a willful refusal to update one’s map to what “is”, and instead responding to it by flinching at what isn’t as it “should” be. This isn’t necessarily a mistake (in the same way that flinching away from a hot stove isn’t a mistake), and while she does make other related comments elsewhere in the thread, there’s no clear indication of whether this is a mistake or a deliberate decision to limit her level of effort there. If it is a mistake, then it’s likely “I don’t like having to admit that people don’t demonstrate as much security as I think they should, and I don’t wanna admit that it’s a thing that is going to stay real and problematic even when I flinch at it”. Another prediction is that to the extent that it is this, and she reads this comment, this error will go away.
I don’t want to confuse my personal impression with the conditional predictions of the model itself, but I do think it’s worth noting that I personally would grant the bid for respect. Last time I laughed off something that she didn’t agree should be laughed off, it took me about five years to realize that I was wrong. Oops.
Just checking, what are X, Y and Z?
(I’m interested in a concrete answer but would be happy with a brief vague answer too!)
(Added: Please don’t feel obliged to write a long explanation here just because I asked, I really just wanted to ask a small question.)
The same stuff that’s outlined in the post, both up at the top where I list things my brain tries to do, and down at the bottom where I say “just the basics, consistently done.”
Regenerating the list again:
Engaging in, and tolerating/applauding those who engage in:
Strawmanning (misrepresenting others’ points as weaker or more extreme than they are)
Projection (speaking as if you know what’s going on inside other people’s heads)
Putting little to no effort into distinguishing your observations from your inferences/speaking as if things definitely are what they seem to you to be
Only having or tracking a single hypothesis/giving no signal that there is more than one explanation possible for what you’ve observed
Overstating the strength of your claims
Being much quieter in one’s updates and oopses than one was in one’s bold wrongness
Weaponizing equivocation/doing motte-and-bailey
Generally, doing things which make it harder rather than easier for people to see clearly and think clearly and engage with your argument and move toward the truth
This is not an exhaustive list.
Mechanistically… since stag hunt is in the title of the post… it seems like you’re saying that any one person committing “enough of these epistemic sins to count as playing stag” would mean that all of lesswrong fails at the stag hunt, right?
And it might be the case that a single person playing stag could be made up of them failing at even just a single one of these sins? (This is the weakest point in my mechanistic model, perhaps?)
Also, what you’re calling “projection” there is not the standard model of projection I think? And my understanding is that the standard model of projection is sort of explicitly something people can’t choose not to do, by default. In the standard model of projection it takes a lot of emotional and intellectual work for a person to realize that they are blaming others for problems that are really inside themselves :-(
(For myself, I try not to assume I even know what’s happening in my own head, because experimentally, it seems like humans in general lack high quality introspective access to their own behavior and cognition.)
The practical upshot here, to me, is that if the models you’re advocating here are true, then it seems to me like lesswrong will inevitably fail at “hunting stags”.
...
And yet it also seems like you’re exhorting people to stop committing these sins and exhorting them moreover to punitively downvote people according to these standards because if LW voters become extremely judgemental like this then… maybe we will eventually all play stag and thus eventually, as a group, catch a stag?
So under the models that you seem to me to have offered, the (numerous individual) costs won’t buy any (group) benefits? I think?
There will always inevitably be a fly in the ointment… a grain of sand in the chip fab… a student among the masters… and so the stag hunt will always fail unless it occurs in extreme isolation with a very small number of moving parts of very high quality?
And yet lesswrong will hopefully always have an influx of new people who are imperfect, but learning and getting better!
And that’s (in my book) quite good… even if it means we will always fail at hunting stags.
...
The thing I think that’s good about lesswrong has almost nothing to do with bringing down a stag on this actual website.
Instead, the thing I think is good about lesswrong has to do with creating a stable pipeline of friendly people who are all, over time, getting a little better at thinking, so they can “do more good thinking” in their lives, and businesses, and non-profits, and perhaps from within government offices, and so on.
I’m (I hope) realistically hoping for lots of little improvements, in relative isolation, based on cross-fertilization among cool people, with tolerance for error, and sharing of ideas, and polishing stuff over time… Not from one big leap based on purified perfect cooperation (which is impossible anyway for large groups).
You’re against “engaging in, and tolerating/applauding” lots and lots of stuff, while I think that most of the actual goodness arises specifically from our tolerant engagement of people making incremental progress, and giving them applause for any such incremental improvements, despite our numerous inevitable imperfections.
Am I missing something? What?
I am confused by a theme in your comments. You have repeatedly chosen to express that the failure of a single person completely destroys all the value of the website, even going so far as to quote ridiculous numbers (on the order of 1e−18 [1]) in support of this.
The only model I have for your behavior that explains why you would do this, instead of assuming something like Duncan believing something like “The value of C cooperators and D defectors is min(0, C − D²)”, is that you are trying to make the argument look weak. If there is another reason to do this, I’d appreciate an explanation, because this tactic alone is enough to make me view the argument as likely adversarial.
No, and if you had stopped there and let me answer rather than going on to write hundreds of words based on your misconception, I would have found it more credible that you actually wanted to engage with me and converge on something, rather than that you just really wanted to keep spamming misrepresentations of my point in the form of questions.
Epistemic status: socially brusque wild speculation. If they’re in the area and it wouldn’t be high effort, I’d like JenniferRM’s feedback on how close I am.
My model of JenniferRM isn’t of someone who wants to spam misrepresentations in the form of questions. In response to Dweomite’s comment below, they say:
It was a purposefully pointed and slightly unfair question. I didn’t predict that Duncan would be able to answer it well (though I hoped he would chill out, give a good answer, and then we could high five, or something).
If he answered in various bad ways (that I feared/predicted), then I was ready with secondary and tertiary criticisms.
My model of the model which outputs words like these is that they’re very confident in their own understanding—viewing themself as a “teacher” rather than a student—and are trying to lead someone who they think doesn’t understand by the nose through a conversation which has been plotted out in advance.
Plausible to me. (Thanks.)