Yet another failed utopia

A DeepMind-like AI is trained and executed on a decentralized supercomputer like the Golem network, perhaps one that also permits microtransactions like the IOTA Tangle.

The algorithm collects texts posted by citizens on public fora and scores them by topic using sentiment analysis. From candidate actions A1, …, An, it takes the action Ax that satisfies the following condition:

For each plan Ai, find the maximum discontent of the relevant kind voiced by any citizen under that plan, max(Ai). Then, across all plans, choose the one whose maximum discontent is smallest. That is, Ax is the plan satisfying max(Ax) = min(max(A1), …, max(An)): a minimax rule on voiced discontent.
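The selection rule above can be sketched in a few lines of Python. The plan names and the discontent scores here are invented for illustration; in the scenario they would come from the sentiment-analysis stage:

```python
def minimax_plan(discontent_by_plan):
    """Pick the plan whose worst (maximum) voiced discontent is smallest."""
    return min(discontent_by_plan, key=lambda plan: max(discontent_by_plan[plan]))

# Hypothetical sentiment-derived discontent scores (0 = content, 1 = furious),
# one score per relevant post, grouped by plan.
scores = {
    "A1": [0.2, 0.9, 0.4],   # worst voice under A1: 0.9
    "A2": [0.5, 0.6, 0.55],  # worst voice under A2: 0.6
    "A3": [0.1, 0.8, 0.3],   # worst voice under A3: 0.8
}

print(minimax_plan(scores))  # → A2, whose worst discontent (0.6) is the smallest
```

Note that the rule only sees discontent that is actually voiced, which is exactly the loophole the final paragraph worries about.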

Even if such an AI is hard-coded not to suppress free speech on those fora, how could we possibly trust it not to silence citizens surreptitiously in some other way, so that their discontent is never voiced at all?