Hello Scott! You might be interested in my proposals for AI goal structures that are designed to be robust to scale:
Using homeostasis-based goal structures:
https://medium.com/threelaws/making-ai-less-dangerous-2742e29797bd
and
Permissions-then-goals based AI user “interfaces” + legal accountability:
https://medium.com/threelaws/making-ai-less-dangerous-2742e29797bd
I would like to propose a kind of AI goal structure that serves as an alternative to utility-maximisation-based goal structures. The proposed framework would make AI significantly safer, though it would not guarantee total safety. It can be applied at the strong-AI level and also well below it, so it scales well. The main idea is to replace utility maximisation with the concept of homeostasis.
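A minimal sketch of the contrast (my own illustration, not taken from the linked posts): a utility maximiser scores "more" as always better, while a homeostatic objective rewards keeping monitored variables near target setpoints, so deviations in either direction are penalised. The variable names and setpoint values are hypothetical.

```python
def utility_maximising_score(resources: float) -> float:
    # Classic maximiser: more is always better, so the agent is
    # incentivised to push this variable upward without bound.
    return resources

def homeostatic_score(state: dict[str, float], setpoints: dict[str, float]) -> float:
    # Homeostatic objective: the best score (0.0) is reached when every
    # monitored variable sits at its setpoint; deviation in EITHER
    # direction is penalised, so "more" stops being automatically better.
    return -sum((state[k] - setpoints[k]) ** 2 for k in setpoints)

# Hypothetical setpoints for two monitored variables.
setpoints = {"temperature": 21.0, "battery": 0.8}
balanced = {"temperature": 21.0, "battery": 0.8}
extreme = {"temperature": 21.0, "battery": 1.0}  # battery "maximised"

# The balanced state scores higher than the extreme one.
print(homeostatic_score(balanced, setpoints) > homeostatic_score(extreme, setpoints))
```

The point of the sketch is only the shape of the objective: under homeostasis, overshooting a target is as bad as undershooting it, which removes the incentive to take a variable to an extreme.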