What is the most evil AI that we could build, today?

I recently came across this post by Kevin Lacker about narrow AI risks. TLDR: there are plausible routes to apocalyptic AI well before we achieve AGI.

My default position has been extreme skepticism about the risks of AGI. I am generally in agreement with Andrew Ng's quip that “I don’t work on preventing AI from turning evil for the same reason that I don’t work on the problem of overpopulation on the planet Mars.” I am still quite skeptical even of the Kevin Lacker scenarios, but somewhat less so than I was.

A lot of the AI risk discussion I’ve seen focuses on hypotheticals, theories of alignment, or distant-future, low-probability scenarios.

I’d like to ask for less theoretical ideas that could focus my thinking, and perhaps the thinking of the community, on more immediate threats.

This brings me to my question for the LW community: what is the most evil AI that could be built, today?

If you were an evil genius with, say, $1B worth of computing power, what is the most harm you could possibly do to society? In complete seriousness, I think the most harmful AI currently in existence is something like Facebook’s user-engagement algorithms, or cameras running software designed to identify minorities and report them to the government. Is there a more harmful AI that either currently exists or could be created?