How much of AI alignment and safety has been informed at all by economics?
Part of the background to my question relates to the paperclip maximizer story. I may be misunderstanding the problem that story is meant to illustrate (I have not read the original), but to me it largely reads as a description of an economic system failure.
Yes, a lot of it has been informed by economics. Some authors emphasize the relation, others de-emphasize it.
The relation goes beyond alignment and safety research. The way modern ML research defines its metric of AI agent intelligence is directly based on utility theory, which von Neumann and Morgenstern developed to describe games and economic behaviour.
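To make the connection concrete, here is a minimal sketch (with made-up numbers and hypothetical names) of the expected-utility maximization that von Neumann–Morgenstern utility theory formalizes; reinforcement learning's notion of a rational agent maximizing expected return is the same formalism:

```python
# Hypothetical toy example: an agent that ranks actions by expected
# utility, the decision rule axiomatized by von Neumann and Morgenstern
# for economic choice and reused in ML as "maximize expected return".

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Made-up decision problem: a certain payoff vs. a risky gamble.
actions = {
    "safe":  [(1.0, 5.0)],               # guaranteed utility of 5
    "risky": [(0.5, 12.0), (0.5, 0.0)],  # coin flip: utility 12 or 0
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # "risky": expected utility 6 beats the certain 5
```

An agent defined this way is "rational" exactly in the economist's sense, which is why critiques of naive maximizers (like the paperclip story) translate so naturally into the language of economic failure modes.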