This is worrying: it seems plausible to me that there isn’t a “correct” rationality or intelligence algorithm (even given infinite compute), but that we might never realize this, because people who believe it wouldn’t want to work on AI alignment in the first place (a selection effect on who examines the question).
At least when it comes to the “friendly values” part of rationality, I’m very much on the “find an adequate solution” side (https://www.lesswrong.com/posts/Y2LhX3925RodndwpC/resolving-human-values-completely-and-adequately).