In preparing for the consequences of AI, it is arguably just as important to flesh out the world that we want as the world that we fear. I acknowledge that as "a powerful tool, AI can equally be used to silence, amplify, or distort the public voice"; however, my interest in this piece is to focus on the positive rather than the negative case (more on the negative case can be found in the references). It is almost inevitable that AI will shape public discourse (it already does); my article aims to discuss the ways this could be done more fruitfully. It is also notable that everything proposed here is either human-in-the-loop or human-on-the-loop, and does not put AI directly in control of government, unlike some existing popular proposals [4].
LessWrong is a forum where AI x-risk is one of the main topics.
Proposing to let AI be in control of governance sounds very risky, and your post basically ignores all of the associated problems.