I’m weakly betting this has more to do with the genre or style you presented as.
I talked to my mom about it, and I’m not sure exactly what she ended up believing, but like jimmy’s, it went pretty differently. I think she landed somewhere around “not 100% sure what to believe, but I believe my son believes it, and it seems at least reasonable.”
I think my dad ended up believing something like “I don’t really buy everything my son is saying” (more actively skeptical than my mom), but probably something like “there’s something real here, even if I think my son is wrong about some things.”
(In both cases I wasn’t trying to persuade them, so much as say ‘hey, I am your son and this is what’s real for me these days, and, I want you to know that’).
When I talked to my aunt and cousin, I basically showed them the cover of “If Anyone Builds It” and said “people right now are trying to build AI that is smarter than humans, and it seems like it’s working. This book is arguing that if they succeed, it would end up killing everyone, for pretty similar reasons to why the last time something ended up smarter than the rest of the ecosystem (humans), it caused a lot of extinctions – we just didn’t care that much about other animals and steamrolled over them.”
And my aunt and cousin were both just like “oh, huh. Yeah, that makes sense. That, uh, seems really worrying. I am worried now.”
I think leaning on the argument that “humans have caused a lot of extinctions, because we are smarter than the rest of the ecosystem and don’t really care about most species” works pretty straightforwardly with left-leaning types. I haven’t tried it with more right-leaning types.
I think a lot of people can just sorta sense “man, something is going on with AI that is kinda crazy and scary.”
I think it’s only with nerds that it makes sense to get into a lot of the argument depth. I think people have a (correct) immune reaction to things that sound like complicated arguments. But I think the basic argument for AI x-risk is pretty simple, and it’s only when people are sophisticated enough to have complicated objections that it’s particularly useful to get into the deeper arguments.
(Wherein I’d start with “okay, so, yeah, there are a lot of reasonable objections. The core argument is pretty simple, and I think there are pretty good counterarguments to the objections I’ve heard. It’ll get complicated if you want to really get into it, but I’m down to talk through the details if you want.”)
I didn’t actually struggle to convince my mom overall, I just noticed some specific things I said triggered transient skepticism in a way that wasn’t necessary, because I said things to her that were popular in AI circles but sound crazy to normal people. This post was supposed to be a warning to people that those things can sound crazy, and that maybe they’re best avoided.
Everything you say about people having a sense that something weird is happening with AI, and about starting by sharing your perspective rather than trying to persuade, is well put, and I agree. And before bringing up anything that sounds crazy, priming them with something like “this next part is gonna sound crazy/complicated” is a good idea.