I guess, but that's not minimal and doesn't add much.
"How do we make an ASI create a nice (highly advanced technology) instead of a bad one?"
I.e.: kudzugoth vs. robots vs. (a self-propagating change to basic physics).
Put differently:
If we build a thing that can make highly advanced technology, we need to make it help us with that technology rather than kill us with it.
Neat biotech is one such technology but not a special case.
Aligning the AI is a problem mostly independent of what the AI is doing (unless you're building special-purpose non-AGI models, as mentioned above).
I agree that one could do something similar with other tech than neat biotech, but I don’t think this proves that Kudzugoth Alignment is as difficult as general alignment. I think aligning AI to achieve something specific is likely to be a lot easier than aligning AI in general. It’s questionable whether the latter is even possible and unclear what it means to achieve it.