Speaking of uncontrollable complex systems, the grand example is modern civilisation itself (also known as capitalism, Moloch, hyper-modernity, etc.), which often produces results people don't want: species extinction, ever-rising greenhouse gas emissions, deforestation and soil erosion, shortening human attention spans, widespread psychological problems, political division, wars, and so on.
For example, in this talk (https://youtu.be/KCSsKV5F4xc), Daniel Schmachtenberger argues that civilisation is already a "misaligned superintelligence".
So any A(G)I developers should ask themselves whether the technology and products they develop, on balance, make civilisation itself more or less aligned with humans (relative to the current level), and preferably publish their reasoning.
In principle, it seems that in many cases AI could be deployed to improve civilisation-humanity alignment, e.g. by developing innovative AI-first tools for collective epistemics and governance (à la https://pol.is), better media, and AI assistants that help people build human relationships rather than replace them. Unfortunately, such technologies, systems, and products are not particularly favoured by the market or by the incumbent political systems themselves.