I wouldn’t use the word “crank” myself to describe either Yudkowsky or de Grey, but I suspect there’s a grain of truth in that interpretation. Eliezer does say or write embarrassing things from time to time, though I wouldn’t be surprised if most of the embarrassing speech attributed to him is unrelated to machine intelligence. I don’t know enough about de Grey to have an opinion on how embarrassing he may or may not be. Nick Bostrom seems like the sort of person who gets TV interviews. If not him, Stephen Hawking. Even if Stephen Hawking doesn’t get invited to White House dinners, I imagine Elon Musk or Bill Gates could easily get invited.
These men haven’t totally screwed up their credibility, but neither does it seem they’ve scored lots of points for speaking up about potential dangers from machine superintelligence. With his $10 million donation to the Future of Life Institute, Elon Musk might have gained points, but then he gains points for almost everything he does these days. Anyway, if having Eliezer play the Embarrassing Crank was necessary, it could be argued his role was just as important, because he had the will and courage to take it on. Eliezer believes he’s playing a major part in saving the world, a project he takes seriously and probably considers more important than public relations management. The mindset he has cultivated over a dozen years, of being above status games when weighed against saving the world, might explain why he doesn’t worry about expressing himself poorly, seeming ridiculous, or getting into tiffs with the media.
Well, at this point I think Eliezer has basically succeeded in that role. My evidence is that people like Hawking and Musk and Gates (the “Traditional Public Intellectuals” of my post, though only Hawking really fits that label well) have started picking up the AI safety theme. He won’t be getting credit for it until it goes truly mainstream, but that’s how being an early adopter works in this context. I don’t know much about Nick Bostrom on a strategic level, but from what I’ve read of his publications he seems to be taking a complementary approach.
But if we set aside petty stuff like exactly which labels to use, I think we largely agree. The main thing I’m trying to get across is that it takes a highly specific personality to bootstrap something like FAI research into the edges of the intellectual Overton window, and that while I (strongly!) sympathize with the people frustrated by e.g. the malaria drone thing or the infamous utopian Facebook post, it’s important to recognize that it comes from the same place that the Sequences did.
That has implications in both directions, of course.
This is the message I failed to infer from your original reply. Yes, I concur: we’re in agreement.
I would.