I like the spirit with which you’re meeting me here.
In all honesty I’m probably not going to respond in detail. That’s just a matter of respecting my time & energy.
But thank you for this. This feels quite good to me. And I’m grateful for you meeting me this way.
RE “no command validity”: Basically just… yes? I totally agree with where I think you’re pointing there as a guideline. I’m gonna post something soon that’ll add the detail that I’d otherwise add here. (Not in response to you. It just so happens to be related and relevant.)
In all honesty I’m probably not going to respond in detail. That’s just a matter of respecting my time & energy.
Understandable! No worries at all. I’ll take your message as a FIN, and this message as a FIN-ACK; before this, I thought we were headed for a connection timeout, so it’s very pleasing to end on a mutually acknowledged, friendly note instead. Glad we had this talk, and sorry to have amplified the fight.
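(For anyone reading along who doesn’t know the metaphor, here’s a toy sketch of what a graceful FIN / FIN-ACK close looks like versus a timeout; the socket pair and the comments are just mine for illustration, not anything from this exchange.)

```python
import socket

# Toy illustration of a graceful close, using a local socket pair.
a, b = socket.socketpair()   # two connected endpoints
a.shutdown(socket.SHUT_WR)   # a signals "I'm done sending" (the FIN)
assert b.recv(4096) == b""   # b sees end-of-stream...
b.close()                    # ...and closes its own side too (the FIN-ACK)
a.close()                    # a mutually acknowledged ending, no timeout needed
```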
FWIW, if it’s useful for your thinking: I think the very problem we ran into here is the biggest issue in distributed-systems safety for humans themselves: how do you explain yourself to a group so severely divided that the fight has started leading beings to disconnect their meanings from one another?
I’d love to talk through distributed-systems safety with you at some point, though probably not now in this thread, for various reasons. I’m hopeful my ideas will shortly be obvious enough that I simply won’t have to: it seems DeepMind may yet again scoop me, and if DeepMind can scoop me on how AI can help solve social issues, there’s no way I’d be happier to be disappointed. I claim you may be surprised to be scooped on your human-friendliness work by AI-friendliness researchers shortly too. The general gist of my hunch is that agentic coprotection is reachable, and that consent-to-have-shared-meaning may itself be a fundamental component of AI safety; that is, something along the lines of consent to establish mutual information. Or something. It’s a research project because I’m pretty sure that’s not enough to specify it.
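(To pin down the jargon: by “mutual information” I just mean the standard Shannon quantity, i.e. how much observing one party’s signal reduces your uncertainty about the other’s. A throwaway sketch with made-up toy data; the “consent” part is the actual open question, and none of this is specific to the safety claim.)

```python
import math
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) over an empirical joint distribution given as (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in joint.items()
    )

# Perfectly shared meaning: knowing my word tells you the whole bit.
print(mutual_information([("yes", 1), ("no", 0)] * 50))                          # 1.0
# No shared meaning: my word tells you nothing.
print(mutual_information([("yes", 1), ("yes", 0), ("no", 1), ("no", 0)] * 25))   # 0.0
```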
Anyway, have a good one!