[Question] Realistic near-future scenarios of AI doom understandable for non-techy people?

I would like to compile a list of the AI doom scenarios that most people (especially politicians) will understand, and will agree that the scenarios are realistic and facts-based. A few examples:

  • There are AIs that help design new molecules. For example, AlphaFold predicts protein structures, which aids in designing new proteins. Such AIs could also be used to design and improve biological weapons (e.g. by making Ebola even more deadly). North Korea, ISIS, or even a resourceful loner could use AI to create a virus capable of killing billions of people.

  • Several major countries are working on automating their militaries. For example, the US defense contractor Palantir has announced a system (“Palantir AIP”) that uses AI to automate much high-level military decision making. China is working on similar systems. If the trend continues, AI will penetrate all levels of the military command chain, with more and more decisions delegated to AI. But even the smartest AIs are not error-free. A trusted AI that seems reliable in most situations could make a deadly mistake in an unusual one. This could lead to all sorts of dangerous scenarios, including an avalanche-like escalation between the American and Chinese “automatic generals”, greatly increasing the risk of a nuclear war.

  • Social networks, targeted ads, and chatbot trolls are already used to sow division, promote radical ideologies, and help dangerous populists win elections. The smarter AIs become, the more effectively they can be used to manipulate public opinion. This makes it easier for radical movements to gain traction, and for unhinged people to become presidents. The next bin Laden and the next Hitler will gain power thanks to AI. And this time it will be much easier for them to develop weapons of mass destruction (e.g. AI-designed bioweapons).

What are some other such scenarios?
