Are there any specific examples of anybody working on AI tools that autonomously look for new domains to optimize over?
If no, then doesn’t the path to doom still depend on a human choosing to apply their software to some new and unexpectedly lethal domain, or on giving the software real-world capabilities with unexpectedly lethal consequences? If so, shouldn’t preventing that be a priority for AI safety efforts?
If yes, then maybe we should have a conversation about which of these projects is most likely to bootstrap itself, and what paths it would likely take.