AIs without utility functions, but with some other motivational structure, will tend to self-modify into utility-function AIs. Utility-function AIs seem more stable under self-improvement, though even a utility-function AI has many possible reasons to change its utility function (e.g. speed of access, multi-agent situations).
Could you clarify what you mean by an “other motivational structure”? Something with preference non-transitivity?
For instance. http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
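For context on why non-transitive preferences are unstable, the classic argument is the money pump: an agent with cyclic preferences A > B > C > A will pay a small fee for each trade up its cycle and can be drained indefinitely without ever ending up better off. A minimal sketch (the preference relation and fee here are illustrative, not from the linked paper):

```python
# Hypothetical agent with intransitive (cyclic) preferences A > B > C > A.
# prefers(x, y) is True when the agent strictly prefers x over y.
PREFERS = {("A", "B"), ("B", "C"), ("C", "A")}

def prefers(x: str, y: str) -> bool:
    return (x, y) in PREFERS

def money_pump(start: str, fee: float, rounds: int) -> float:
    """Trade the agent up its preference cycle, charging `fee` per swap;
    returns the total amount extracted."""
    holding, paid = start, 0.0
    for _ in range(rounds):
        for candidate in ("A", "B", "C"):
            if prefers(candidate, holding):
                holding = candidate   # agent accepts the trade...
                paid += fee           # ...and pays the fee each time
                break
    return paid

print(money_pump("B", fee=1.0, rounds=6))  # prints 6.0: six fees paid, agent ends back at B
```

A utility-function agent assigns each option a single number, which makes such cycles impossible; this is one intuition for why self-improvement might push other motivational structures toward a utility function.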