This is an interesting question. Without commenting on whether I think the approach would work (if it does, that would be a great thing!), I'll note that it does not address the “anyone” dimension of IABIED. In other words, one reason to try to build long-term consequentialist AI (if we think we can do so safely) would be to prevent anyone else from doing so unsafely.