I’d bet we’re going to figure out how to make an Omohundro optimiser (a fitness-maximising AGI) before we figure out how to make an AGI that can rescue the utility function, preserve a goal, or significantly optimise any metric other than its own survival, such as paperclip production, or the Good.
(Arguing for that is a bit beyond the scope of the question, but I know this position already has a lot of support. I’ve heard Eliezer say, if not this exactly, something very similar. Nick Land especially believes that only the Omohundro drives could animate self-improving AGI. I don’t think Nick Land understands how agency needs to intercede in prediction—that it needs to consider all of the competing self-fulfilling prophecies and only profess the prophecy it really wants to live in, rather than immediately siding with the prophecy that seems the most hellish and the easiest to stumble into. The prophecies he tends to choose do seem like the easiest ones to stumble into, so he provides a useful service as a hazard alarm, for those of us who are trying to learn not to stumble.)
What would you advise we do when one of us finds themselves in the position of knowing how to build an Omohundro optimiser? Delete the code and forget it?
If we had a fitness-optimising program, is there anything good it could be used for?