Hey, I think I’m the target audience of this post, but it really doesn’t resonate well with me.
Here are some of my thoughts/emotions about it:
First, things I agree with:
Feeling doom/depressed/sad isn’t a good idea. I wish I had less of it myself :( it isn’t helping anyone
Many people thought the world would end and they were wrong—this should give us a lower prior on ourselves being right about the world maybe ending
Biases are probably in play
I read you as trying to be supportive, which I appreciate, thanks
Things that feel bad to me:
5. Was worrying about covid similarly misguided?
At the beginning of covid, people said “stop making a big deal about it” and “lots of people freaked out about things in the past and were wrong”. I also live close to a city with a lot of (religious) anti-vax people, and I once visited a (hippie) place where I mentioned when I was last tested and what I’d done since; the person there told me I looked fine, that I’d better not think about it, and that I’d be healthier if I didn’t think about covid too much.
Should I have listened to the people who told me to chill about covid? I assume you agree I shouldn’t have, beyond the point (which I agree with) that there was no reason to emotionally freak out; just take reasonable actions, that’s all.
What’s the difference between the AI and covid scenarios? I assume you’d say “covid was real” (?). Anyway, that’s my crux: is AI danger “real” (vaguely defined)?
What if the covid deniers had said “if you’re so sure about covid, tell us exactly in what week (or whatever) everyone will die”? That seems like a hard/unfair question. I personally think people spend way too much energy geeking out about AI timelines, and after (metaphorically) a million requests for timelines, a serious team wrote ai-2027, which is a probability distribution: they think there’s a 50% chance problems happen earlier and a 50% chance they happen later (right?). Telling me to chill at 2028 seems similar to asking when covid problems will actually hit, getting a probability distribution in response, and then being told to “chill” once the median has passed, even while things still seem to be getting worse for the same underlying reasons. Yes, there’s a point where, if AI doesn’t seem to be taking over, I’ll eat my hat and say I’m completely missing something about how the world works, but that point is not 2028.
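To make the median point concrete, here’s a minimal sketch with made-up numbers (not ai-2027’s actual distribution) showing that a forecast’s median passing uneventfully doesn’t refute the forecast; by construction, half the probability mass was always on later dates, and a coherent forecaster just renormalizes what remains:

```python
# Hypothetical discrete forecast over "year trouble hits".
# Numbers are illustrative only, not taken from ai-2027.
forecast = {2027: 0.25, 2028: 0.25, 2029: 0.20, 2031: 0.15, 2035: 0.15}

# The median is the first year at which cumulative probability reaches 50%.
cumulative, median = 0.0, None
for year in sorted(forecast):
    cumulative += forecast[year]
    if median is None and cumulative >= 0.5:
        median = year
print(f"median year: {median}")  # 2028 in this toy example

# Suppose 2028 passes with nothing happening. Conditioning on
# "nothing yet" just renormalizes the mass on later years;
# it doesn't falsify the original distribution.
later = {y: p for y, p in forecast.items() if y > 2028}
total_later = sum(later.values())
posterior = {y: round(p / total_later, 2) for y, p in later.items()}
print("P(year | nothing by end of 2028):", posterior)
```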
6. Fallacy Fallacy
I often think back on The Adventures Of Fallacy Man.
Both “sides” have fallacies. Maybe doomers want a way to deal with our mortality; maybe non-doomers are biased against imagining big changes. I don’t think this kind of conversation is a good way to figure out what’s true. AI is a complicated topic that exposes how problematic many rules of thumb and metaphors are, and it pushes us to look at the details, or at least to pick our mental shortcuts wisely.