Sure, and if you think that balance of successful / not-successful predictions means it makes sense to try to predict the future psychology of AIs on its basis, go for it.
But do so because you think it has a pretty good predictive record, not because there aren’t any other theories. If it has a bad predictive record then Rationality and Law doesn’t say “Well, if it’s the best you have, go for it,” but “Cast around for a less falsified theory, generate intuitions, don’t just use a hammer to fix your GPU because it’s the only tool you have.”
(Separately, I do think that it is VNM plus a bucket of other premises that leads generally toward extinction, not VNM alone.)