[Question] What could small scale disasters from AI look like?

If an AI system (or systems) goes wrong in the near term and causes harm to humans in a way that is consistent with, or supportive of, alignment being a big deal, what might that look like?

I’m asking because I’m curious about potential fire-alarm scenarios (including things which would just help make AI risks salient to the wider public), and also because I’m looking to operationalise a forecasting question which is currently drafted as

By 2032, will we see an event precipitated by AI that causes at least 100 deaths and/or at least $1B 2021 USD in economic damage?

to allow a clear and sensible resolution.