All 8 parts (that I have current plans to write) are now posted, so I’d be interested in your assessment now, after having read them all, of whether the approach outlined in this series is something that should at least be investigated, as a ‘forgotten root’ of the equation.
I remain unconvinced of the feasibility of your approach, and the later posts have done nothing to address my concerns, so I have no specific comments on them: they reason from an assumption I don’t accept. The crux of why I think this approach can’t work is expressed in this comment; addressing that would be required to change my mind about whether this idea is worth spending much time on.
I think there may be something to thinking about killing AIs, but absent a stronger sense of how that would be accomplished, I’m not sure the rest of the ideas matter much, since they hinge on it working in particular ways. I’d definitely be interested in reading more about schemes for disabling or killing unaligned AIs, but we need a clearer picture of how, specifically, an AI would be killed.