Yeah “esoteric” perhaps isn’t the best word. What I had in mind is that they’re relatively more esoteric than “AI could kill us all” and yet it’s pretty hard to get people to take even that seriously! “Low-propensity-to-persuade-people” maybe?
but “extremely unlikely” seems like an overstatement[...]

Yes, this is fair.
Yeah, that makes sense. I guess I’ve been using “illegible” for a similar purpose, but maybe that’s not a great word either, because it also seems to imply “hard to understand”, and again, the problems I’ve been writing about are not that hard to understand.
I wish I knew what is causing people to ignore these issues, including people in rationality/EA (e.g., the most famous rationalists have said little about them). I may be slowly growing an audience (e.g., Will MacAskill invited me to do a podcast with his org, and Jan Kulveit just tweeted “@weidai11 is completely right about the risk we won’t be philosophically competent enough in time”), but it’s inexplicable to me how slow the uptake has been, compared to something like UDT, which instantly became “the talk of the town” among rationalists.
It’s pretty plausible that the same underlying mechanism is also causing the general public not to take “AI could kill us all” very seriously, and I wish I understood that better as well.