I do it, here and elsewhere, because most of your comments seem to me entirely orthogonal to the thing they ostensibly respond to, and the charitable interpretation of that is that I’m failing to understand your responses the way you meant them, and my response to that is typically to echo back those responses as I understood them and ask you to either endorse my echo or correct it.
Which, frequently, you respond to with yet another comment that seems to me entirely orthogonal to my request.
But I can certainly appreciate why, if you’re assuming that I’m trying to twist your words and otherwise being malicious, you’d refuse to cooperate with me in this project.
That’s fine; you’re under no obligation to cooperate, and your assumption isn’t a senseless one.
Neither am I under any obligation to keep trying to communicate in the absence of cooperation, especially when I see no way to prove my good will, especially given that I’m now rather irritated at having been treated as malicious until proven otherwise.
So, as I said, I think the best thing to do is just end this exchange here.
Not malicious, really; it's just an extremely common pattern of behaviour. People are goal-driven agents, and their reading is goal-driven too: they pick meanings for words so as to fit some specific goal, which is surprisingly seldom understanding. Especially on a charged issue like the risks of anything, where people typically choose their position via some mix of political orientation, cynicism, etc., and then defend that position like a lawyer defending a client. edit: I guess it echoes the assumption that an AI typically isn't friendly if it has pre-determined goals that it optimizes towards. People typically do have pre-determined goals in discussion.
Sure. And sometimes those goals don’t involve understanding, and involve twisting other people’s words, obscuring the topic, and substituting meanings to edge the conversation towards a predefined conclusion, just as you suggest. In fact, that’s not uncommon. Agreed.
If you mean to suggest by that that I ought not be irritated by you attributing those properties to me, or that I ought not disengage from the conversation in consequence, well, perhaps you’re right. Nevertheless I am irritated, and am consequently disengaging.