Oh I agree the main goal is to convince onlookers, and I think the same ideas apply there. If you use language that’s easily mapped to concepts like “unearned confidence”, the onlooker is more likely to dismiss whatever you’re saying.
It’s practically an invitation to irrelevant philosophical debates about how all technologies are risky and yet we’re still alive, and I don’t know how to get out of those without reference to probabilities and expected values.
If that comes up, yes. But then it’s them who have brought up the fact that probability is relevant, so you’re not the one first framing it like that.
This kinda misses the bigger picture? “Belief that there is a substantial probability of AI killing everyone” is a 1000x stronger shibboleth and a much easier target for derision.
Hmm. I disagree, though I’m not sure exactly why. I think it’s something like: people focus on short phrases and commonly-used terms more than they focus on ideas. Like how the SSC post I linked gives the example of Republicans being just fine with drug legalization as long as it’s framed in right-wing terms. Or how talking positively about eugenics will get you hated, but talking positively about embryo selection and laws against incest will be taken seriously. I suspect that most people don’t actually take positions on ideas at all; they take positions on specific tribal signals that happen to be associated with ideas.
Consider all the people who reject the label of “effective altruist”, but try to donate to effective charity anyway. That seems like a good thing to me; some people don’t want to be associated with the tribe for some political reason, and if they’re still trying to make the world a better place, great! We want something similar to be the case with AI risk; people may reject the labels of “doomer” or “rationalist”, but still think AI is risky, and using more complicated and varied phrases to describe that outcome will make people more open to it.
I am one of those people; I don’t consider myself EA due to its strong association with atheism, but I am nonetheless very much for slowing down AGI before it kills us all.