Good evening.
I really enjoyed reading your analysis, especially as someone who’s probably younger than many users here; I was born the same year this war started.
Anyway, my question for you is this. You state:
“If there’s some non-existential AI catastrophe (even on the scale of 9/11), it might open a policy window to responses that seem extreme and that aren’t just direct obvious responses to the literal bad thing that occurred. E.g. maybe an extreme misuse event could empower people who are mostly worried about an intelligence explosion and AI takeover.”
I’ve done thought experiments and scenarios in sandbox environments with many SOTA AI models, and I try to read a lot of safety literature (Nick Bostrom’s 2014 Superintelligence comes to mind; it’s one of my favorites). My question is: what do you think the most likely non-existential AI risk is? My own view is that persuasion is the biggest non-existential AI risk, both through sycophancy and through manipulation of consumer and voting habits.
Do you agree, or is there a different angle you see on non-existential AI risk?