Are there any specific examples of anybody working on AI tools that autonomously look for new domains to optimize over?
If no, then doesn’t the path to doom still run through a human choosing to apply their software to some new and unexpectedly lethal domain, or giving the software real-world capabilities with unexpectedly lethal consequences? And shouldn’t that, then, be a priority for AI safety efforts?
If yes, then maybe we should have a conversation about which of these projects is most likely to bootstrap itself, and the paths it is likely to take?
We now know more than nothing about the real-world operational details of AI risks, albeit mostly from banal everyday AI that we can’t imagine harming us at scale. So maybe that’s what we should try harder to imagine and prevent.
Maybe those solutions won’t generalize beyond the already-observed, real-world AI risk distribution. But even if not, which of these is more dignified?
- Being wiped out in a heartbeat by some nano-Cthulhu in pursuit of some inscrutable goal that nobody genuinely saw coming
- Being killed even before that by whatever is the most lethal thing you can imagine evolving from existing ad-click maximizers, bitcoin maximizers, up-vote maximizers (oh, and military drones, those are kind of lethal), etc., because they seemed like too mundane a threat