Could you point out the main things that give the impression that I’m presuming utility-function-based decision making?
I am not sure what AGI designs exist, other than utility-function-based decision makers, for which it would make sense to talk about “friendly” and “unfriendly” goal architectures. If we’re talking about behavior executors or AGI designs with malleable goals, then we’re talking about hardcoded tools in the former case and unpredictable systems in the latter, no? (A toy sketch of the contrast is below.)
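To make the distinction concrete, here is a minimal sketch, purely illustrative and not from the original exchange: the class names and example rules are my own assumptions. A utility-function-based decision maker has an explicit goal one could call friendly or unfriendly, while a behavior executor just replays a hardcoded stimulus-response table with no goal to evaluate.

```python
from typing import Callable, Dict, Iterable


class UtilityMaximizer:
    """Chooses whichever available action scores highest under its utility function.
    The 'goal architecture' lives entirely in `utility`."""

    def __init__(self, utility: Callable[[str], float]):
        self.utility = utility

    def act(self, actions: Iterable[str]) -> str:
        return max(actions, key=self.utility)


class BehaviorExecutor:
    """Executes a fixed stimulus -> response table; there is no goal
    about which 'friendly' or 'unfriendly' could be predicated."""

    def __init__(self, rules: Dict[str, str]):
        self.rules = rules

    def act(self, stimulus: str) -> str:
        return self.rules.get(stimulus, "do nothing")


# Hypothetical usage: the maximizer's choice follows from its utility function,
# while the executor simply replays whatever was hardcoded.
agent = UtilityMaximizer(utility=lambda a: {"help": 1.0, "harm": -1.0}.get(a, 0.0))
print(agent.act(["help", "harm"]))     # -> "help"

tool = BehaviorExecutor(rules={"button pressed": "open door"})
print(tool.act("button pressed"))      # -> "open door"
```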