What changes would need to be made to the computing environment and software design in order for posse efficiency to be high enough to intimidate AIs into being polite with each other?
I haven’t read the next part yet, so consider this a pre-registration: I suspect it is more likely than not that you will not convince me we can meaningfully do anything to effect the needed situation (though it might come about anyway, just not because we made it happen). I look forward to finding out if you prove my suspicions wrong.
All 8 parts (at least, all that I currently plan to write) are now posted, so I’d be interested in your assessment, now that you’ve read them all, of whether the approach outlined in this series is something that should at least be investigated, as a ‘forgotten root’ of the equation.
I remain unconvinced of the feasibility of your approach, and the later posts have done nothing to address my concerns, so I don’t have any specific comments on them; they reason from an assumption I am unconvinced of. I think the crux of why I believe this approach can’t work is expressed in this comment, so addressing that would be necessary to change my mind and convince me this is an idea worth spending much time on.
I think there may be something to thinking about killing AIs, but lacking a stronger sense of how this would be accomplished, I’m not sure the rest of the ideas matter much, since they hinge on that working in particular ways. I’d definitely be interested in reading more about ways we might develop schemes for disabling or killing unaligned AIs, but I think we need a clearer picture of how, specifically, an AI would be killed.