I believe you now when you say you don’t have the intention I’d assumed; I trust you well enough to believe such a sentence. I had detected significant intentionality towards attempting to destroy, and your resistance to clarifying what metanetwork you’d be participating in set off my alarm bells. I now recognize that what you appear to have read as my intentionality, and what I read as yours, was probably each of us recognizing the other’s behavior as executing network code (egregore fragments, if you prefer the woo phrasing to the distributed-systems phrasing) that would attempt to demand that the other participate in the same network.
I’m cool with it if you’re not in the network that has lately been repeatedly insulted as “woke”. I don’t need you to agree. I don’t intend to make any demand of you beyond recognizing that I’m not actually intending to make demands, or to attack, when I misread your intention. Please understand that my mental netcode’s reading of your behavior as agentic still seems to have been justified at the time. My read that you were guarding against pressure from my netcode still seems right too; but since I already have a model of your base-level trustworthiness, and you have now directly asserted no agency towards destruction, I’m more willing to believe you’re not intending to destroy the good part of what you label “woke”.
So with that in mind: I do think the network behavior seen as woke at Disney (for example) is very unhealthy and worth analyzing in order to repair. I might term it, to coin a name on the fly, “getting high on the woke”, in contrast with actually trying to improve the world in the direction that the subtlety-awareness which got labeled “woke” can provide.
If we’re going to get anywhere, I expect we need to apply a “no command validity” filter to each other’s posts: I won’t accept your commands and you won’t accept mine, and then we can get somewhere. But apparently I’m not able to remove everything you see as a command, because even statements about how I read your interactions seem to read as commands to you. That’s not my intention; my intention was to express annoyance, not to assert that my annoyance is necessarily and unavoidably valid. Your comments asserting that you’re not being agentic in order to attack sneakily reassure me a lot, because I trust you to be honest about an explicit statement like that.
I expect my comment will take several hours, or maybe a day or more, to process into a form that can generate a response you trust won’t be attacked. No worries if so. Or if you feel you can respond with only fast inference mode, that’s fine too; I don’t need you to take forever to respond. I only offer this to make clear, in friendliness, that you need not rush for me.
I still do feel that the core intention of this post was rather a disaster and that we can’t reasonably establish which answers are accurate without a lot more on-the-ground data collection.
I like the spirit with which you’re meeting me here.
In all honesty I’m probably not going to respond in detail. That’s just a matter of respecting my time & energy.
But thank you for this. This feels quite good to me. And I’m grateful for you meeting me this way.
RE “no command validity”: Basically just… yes? I totally agree with where I think you’re pointing there as a guideline. I’m gonna post something soon that’ll add the detail that I’d otherwise add here. (Not in response to you. It just so happens to be related and relevant.)
> In all honesty I’m probably not going to respond in detail. That’s just a matter of respecting my time & energy.
Understandable! No worries at all. I’ll take your message as a fin, and this message as a fin-ack; before, I thought we were headed towards connection timeout, so it’s very pleasing to have a mutually acknowledged friendly interaction ending. Glad we had this talk, sorry to have amplified the fight.
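To make the teardown metaphor concrete: in TCP, a graceful close ends with an acknowledged FIN rather than a silent timeout. Here is a toy sketch of that shape; the names `ConnState` and `close_connection` are made up for illustration, and this is a cartoon of the metaphor, not real TCP state machinery:

```python
from enum import Enum, auto

class ConnState(Enum):
    OPEN = auto()            # conversation still live
    CLOSED_CLEANLY = auto()  # fin sent and acknowledged: a mutual goodbye
    TIMED_OUT = auto()       # fin never acknowledged: a silent drop

def close_connection(sent_fin: bool, got_fin_ack: bool) -> ConnState:
    """Toy model of the metaphor: mutual goodbye vs. connection timeout."""
    if not sent_fin:
        return ConnState.OPEN
    return ConnState.CLOSED_CLEANLY if got_fin_ack else ConnState.TIMED_OUT

# "your message as a fin, and this message as a fin-ack":
assert close_connection(sent_fin=True, got_fin_ack=True) is ConnState.CLOSED_CLEANLY
```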
FWIW, for your thinking, if it’s useful: I think the very problem we ran into here is itself the biggest issue in distributed-systems safety for humans. How do you explain yourself to a group so severely divided that the fight has started leading beings to choose to disconnect their meanings from each other?
Would love to talk through distributed-systems safety with you at some point, though probably not now in this thread, for various reasons. But I’m hopeful that my ideas will shortly be obvious enough that I simply won’t have to; it seems DeepMind may yet again scoop me, and if DeepMind can scoop me on how AI can help solve social issues, I don’t think there’s any way I’d be happier to be disappointed. I claim you may shortly be surprised to be scooped on your human-friendliness work by AI-friendliness researchers too. The general gist of my hunch: agentic coprotection is reachable, and consent-to-have-shared-meaning may itself be a fundamental component of AI safety, something along the lines of consent to establish mutual information. Or something; it’s a research project because I’m pretty sure that’s not enough to specify it.
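For reference, the textbook quantity that phrase gestures at is mutual information (this is standard information theory, not anything established in this thread; how “consent” would enter the formalism is exactly the open part of the research project):

```latex
I(X;Y) \;=\; \sum_{x,\,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)} \;=\; H(X) - H(X \mid Y)
```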
Anyway, have a good one!