I’d wondered why you wrote so many pieces advising people to be cautious about more esoteric problems arising from AI,
Interesting that you have this impression, whereas I’ve been thinking of myself recently as doing a “breadth-first search” to uncover high-level problems that others seem to have missed or haven’t bothered to write down. I feel like my writings in the last few years are pretty easy to understand without any specialized knowledge (whereas Google says “esoteric” is defined as “intended for or likely to be understood by only a small number of people with a specialized knowledge or interest”).
If on reflection you still think “esoteric” is right, I’d be interested in an expansion on this, e.g. which of the problems I’ve discussed seem esoteric to you and why.
to an extent that seemed extremely unlikely to be implemented in the real world
It doesn’t look like humanity is on track to handle these problems, but “extremely unlikely” seems like an overstatement. I think there are still some paths where we handle these problems better, including: 1) warning shots or a shift in the political winds cause an AI pause/stop to be implemented, during which some of these problems/ideas are popularized or rediscovered, or 2) future AI advisors are influenced by my writings or are strategically competent enough to recognize these same problems and help warn/convince their principals.
I also have other motivations including:
status—Recognition even among a small group can be highly motivating for humans.
intellectual curiosity—Think of it as “theoretical Singularity strategic studies”. Sure seems more interesting than many other intellectual puzzles that people pursue.
dignity—Even if a few humans can see things more clearly, that’s more dignified than going into the AI transition completely blind.
Yeah “esoteric” perhaps isn’t the best word. What I had in mind is that they’re relatively more esoteric than “AI could kill us all” and yet it’s pretty hard to get people to take even that seriously! “Low-propensity-to-persuade-people” maybe?
but “extremely unlikely” seems like an overstatement[...]
Yes, this is fair.
What I had in mind is that they’re relatively more esoteric than “AI could kill us all” and yet it’s pretty hard to get people to take even that seriously! “Low-propensity-to-persuade-people” maybe?
Yeah, that makes sense. I guess I’ve been using “illegible” for a similar purpose, but maybe that’s not a great word either, because it also seems to imply “hard to understand”, when again, the problems I’ve been writing about are not that hard to understand.
I wish I knew what is causing people to ignore these issues, including people in rationality/EA (e.g. the most famous rationalists have said little on them). I may be slowly growing an audience, e.g. Will MacAskill invited me to do a podcast with his org, and Jan Kulveit just tweeted “@weidai11 is completely right about the risk we won’t be philosophically competent enough in time”, but it’s inexplicable to me how slow it has been, compared to something like UDT, which instantly became “the talk of the town” among rationalists.
Pretty plausible that the same underlying mechanism is also causing the general public to not take “AI could kill us all” very seriously, and I wish I understood that better as well.