Algo trading is a central example of AI risk

I suspect this observation is far from original, but I haven’t seen it stated explicitly anywhere, so it seemed worth spelling out.

The paperclip maximizer is a popular example of unfriendly AI, but it’s not the most realistic one (obviously it wasn’t meant to be). It might be useful to think about which applications of AI are the most realistic examples, i.e. which applications are both likely to use state-of-the-art AI and especially prone to failure modes (which is not to say that other applications are not dangerous). In particular, if AI risk considerations ever make it into policy, such an analysis is one thing that might help inform it.

One application that stands out is algorithmic trading. Consider the following:

  • Algorithmic trading is obviously lucrative, so there are strong economic incentives encouraging it.

  • Algorithmic trading has some aspects that are zero-sum, leading to an especially vicious technological race. In this race, there is no natural stopping point of “good enough”: the more powerful your algorithm, the better.

  • Even ignoring the possibility of competing AIs, there is no “good enough” point: acquiring more money is a goal that is either unlimited or, if it does have a limit, reaching that limit would already require enough power for a pivotal event.

  • The domain is such that it would be very advantageous for the AI to build detailed models of the world as a whole (at least the human world) and to understand how to control it, including human psychology, economics, and technological development. These capabilities are precisely what a pivotal event would require.

  • Algorithmic trading doesn’t require anything close to AGI in order to start paying off. Indeed, it is already a very active domain. This means that the transition from subhuman to superhuman intelligence is more likely to happen in a way that is unintended and unplanned, as the algorithm is gradually scaled up, whether in terms of computing power or in other ways.

  • Last but definitely not least, the utility function is exceptionally simple. Formally specifying a “paperclip” might still be complicated, but here we only need something like “the amount of money in a given bank account”. This means the application requires almost nothing in the way of robust and transparent learning algorithms: sufficiently powerful but otherwise straightforward reinforcement learning might do absolutely fine (see the sketch after this list). Because of this, an algorithmic trading AI might lack even those safety mechanisms that other applications would require before scaling up to superintelligence.
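
To make the simplicity of the objective concrete, here is a minimal sketch of what the specification could look like, written as a toy reinforcement-learning environment in Python. All of the names here are hypothetical (the StubBroker stands in for a real brokerage API); the point is only that the entire “utility function” reduces to a single line, the change in the account balance, with nothing in the specification asking for transparency, robustness, or any notion of side effects.

```python
import random

class StubBroker:
    """Hypothetical stand-in for a real brokerage API."""

    def __init__(self, balance=100_000.0):
        self._balance = balance

    def account_balance(self):
        return self._balance

    def execute(self, action):
        # Pretend each trade moves the balance by a small random amount.
        self._balance += random.gauss(0.0, 10.0)

    def observe_market(self):
        return {"price": random.random()}


class TradingEnv:
    """Toy RL environment whose reward is simply the change in the
    account balance -- the entire 'utility function'."""

    def __init__(self, broker):
        self.broker = broker
        self.balance = broker.account_balance()

    def step(self, action):
        self.broker.execute(action)                  # buy / sell / hold
        new_balance = self.broker.account_balance()
        reward = new_balance - self.balance          # money in the account
        self.balance = new_balance
        observation = self.broker.observe_market()
        done = False                                 # no natural stopping point
        return observation, reward, done, {}


env = TradingEnv(StubBroker())
obs, reward, done, _ = env.step("hold")
```

Note that the done flag is never set, mirroring the observation above that there is no natural “good enough” point at which the objective is satisfied.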