Well, if the solution to alignment requires a particular system to keep running in a certain way, then that can fail. The durability of solutions falls on a spectrum. What we would hope is that the solution we implement is one that improves over time, rather than one that stays permanently brittle.
I think that asking for a perfect solution is asking a lot. It may be possible to perfectly align a superintelligence to human will, but you also want to maintain as much oversight as you can in case you actually got it slightly wrong.