Elephant in the Brain convinced me that many things humans say are not meant to convey information or achieve conscious goals; rather, we say things to signal status and establish social positioning. Here are three hypotheses for why the community focuses on AI that have nothing to do with the probability or impact of AI:
Less knowledge about AGI. Because there is less knowledge about AGI than pandemics or climate change, it’s easier to share opinions before feeling ignorant and withdrawing from conversations. This results in more conversations.
A disbelieving public. Implicit in arguments ‘for’ a position is the presumption that many people are ‘against’ that position. That is, believing ‘X is true’ is by itself insufficient to motivate someone to argue for X; someone will only argue for X if they additionally believe others don’t believe X. In the case of AI, perhaps arguments for AI risk are more likely to encounter disagreement than arguments for pandemic risk. This encountered disagreement spurs more conversations.
Positive feedback. The more a community reads, thinks, and talks about an issue, the more things they find to say and the more sophisticated their thinking becomes. This begets more conversations on the topic, in a reinforcing feedback loop.
(Disclaimer: I personally don’t worry about AI. I am skeptical that AGI will happen in the next 100 years, and skeptical that AGI will take over Earth in under 100 years, but I nonetheless recognize that both are more than 0% probable. I don’t have a great mental model of why others disagree, but I believe it can be partly explained by software people being more optimistic than hardware people, since software people have experienced more amazing success in the past couple of decades.)
If you think there’s good information about bioengineered pandemics out there, what sources would you recommend?
Multiple LW surveys considered bioengineered pandemics a more likely X-risk, and if there were a good way to spend X-risk EA dollars on the topic, I think it would likely get funding; currently, though, there don’t seem to be good targets.