If it helps at all, another data point (not quite answers to your questions):
I’m a complete SI outsider. My exposure to it is entirely indirect, through Less Wrong, which from time to time seems to function as a PR/fundraising/visibility tool for SI.
I have no particular opinion about SI’s arrogance or non-arrogance as an organization, or EY’s arrogance or non-arrogance as an individual. They certainly don’t demonstrate humility, nor do they claim to, but there’s a wide middle ground between the two.
I doubt I would be noticeably more likely to donate money, or to encourage others to donate money, if SI convinced me that it was now 50% less arrogant than it was in 2011.
One thing that significantly lowers my likelihood of donating to SI is my estimate that the expected value of SI’s work is negligible, and that the increase/decrease in that EV based on my donations is even more so. It’s not clear what SI can really do to increase my EV-of-donating, though.
Similar to the comment you quote: my estimate that someone is a crackpot is directly proportional to their boasts:accomplishments ratio. OTOH, I find it likely that without the boasting and related monkey dynamics, SI would not receive the funding it has today, so it’s not clear that adopting a less boastful stance is actually a good idea from SI’s perspective. (I’m taking as given that SI wants to continue to exist and to increase its funding.)
Just to be clear what I mean by “boasts,” here… throughout the sequences EY frequently presents himself as possessing the intellectual horsepower and insight to transform the world in “impossible” ways and holding back from doing so only because he possesses the unusual wisdom to realize that doing so is immoral. I don’t think that much is at all controversial, but if you really want specific instances I might be motivated to go back through and find some. (Probably not, though.)
I am not impressed by those sorts of ploys.
throughout the sequences EY frequently presents himself as possessing the intellectual horsepower and insight to transform the world in “impossible” ways and holding back from doing so only because he possesses the unusual wisdom to realize that doing so is immoral.
I cannot think of one example of a claim along those lines.
...throughout the sequences EY frequently presents himself as possessing the intellectual horsepower and insight to transform the world in “impossible” ways...
I cannot think of one example of a claim along those lines.
The closest I can think of right now is the following quote from Eliezer’s January 2010 video Q&A:
So if I got hit by a meteor right now, what would happen is that Michael Vassar would take over responsibility for seeing the planet through to safety, and say ‘Yeah I’m personally just going to get this done, not going to rely on anyone else to do it for me, this is my problem, I have to handle it.’ And Marcello Herreshoff would be the one who would be tasked with recognizing another Eliezer Yudkowsky if one showed up and could take over the project, but at present I don’t know of any other person who could do that, or I’d be working with them. There’s not really much of a motive in a project like this one to have the project split into pieces; whoever can do work on it is likely to work on it together.
ETA
Skimming over the CEV document I see some hints that could explain where the idea comes from that Eliezer believes that he has the wisdom to transform the world:
This seems obvious, until you realize that only the Singularity Institute has even tried to address this issue. [...] Once I acknowledged the problem existed, I didn’t waste time planning the New World Order.
You quoted the context of my statement but edited out the part my reply was based on. Don’t do that.
and holding back from doing so only because he possesses the unusual wisdom to realize that doing so is immoral.
The very quote of Eliezer that you supply in the parent demonstrates that Eliezer presents himself as actually trying to do those “impossible” transformations, not refraining from doing them for moral reasons. That part just comes totally out of left field, and since it is presented as a conjunction the whole thing just ends up false.
Thanks for clarifying what part of my statement you were objecting to.
Mostly what I was thinking of on that side was the idea that actually building a powerful AI, or even taking tangible steps that make the problem of building a powerful AI easier, would result in the destruction of the world (or, at best, the creation of various “failed utopias”), and therefore the moral thing to do (which most AI researchers, to say nothing of lesser mortals, aren’t wise enough to realize is absolutely critical) is to hold off on that stuff and instead work on moral philosophy and decision theory.
I recall a long wave of exchanges of the form “Show us some code!” “You know, I could show you code… it’s not that hard a problem, really, for one with the proper level of vampiric aura, once the one understands the powerful simplicity of the Bayes-structure of the entire universe and finds something to protect important enough to motivate the one to shut up and do the impossible. But it would be immoral for me to write AI code right now, because we haven’t made enough progress in philosophy and decision theory to do it safely.”
But looking at your clarification, I will admit I got sloppy in my formulation, given that that’s only one example (albeit a pervasive one). What I should have said was “throughout the sequences EY frequently presents himself as possessing the intellectual horsepower and insight to transform the world in “impossible” ways, one obvious tangible expression of which (that is, actual AI design) he holds back from creating only because he possesses the unusual wisdom to realize that doing so is immoral.”
“You know, I could show you code… it’s not that hard a problem, really,
I’d actually be very surprised if Eliezer had ever said that, since it is plainly wrong and as far as I know Eliezer isn’t quite that insane. I can imagine him saying that it is (probably) an order of magnitude easier than making the coded AI friendly, but that is still just placing it lower on a scale of ‘impossible’. Eliezer says many things that qualify for the label arrogant, but I doubt this is one of them.
If Eliezer thought AI wasn’t a hard problem he wouldn’t be comfortable dismissing (particular instances of) AI researchers who don’t care about friendliness as “Mostly Harmless”!
What I wrote was “it’s not that hard a problem, really, for one with (list of qualifications most people don’t have),” which is importantly different from what you quote.
Incidentally, I didn’t claim it was arrogant. I claimed it was a boast, and I brought boasts up in the context of judging whether someone is a crackpot. I explicitly said, and I repeat here, that I don’t really have an opinion about EY’s supposed arrogance. Neither do I think it especially important.
What I wrote was “it’s not that hard a problem, really, for one with (list of qualifications most people don’t have),” which is importantly different from what you quote.
I extend my denial to the full list. I do not believe Eliezer has made the claim that you allege he has made, even with the list of qualifications. It would be a plainly wrong claim and I believe you have made a mistake in your recollection.
The flip side is that if Eliezer has actually claimed that it isn’t a hard problem (with the list of qualifications) then I assert that said claim significantly undermines Eliezer’s credibility in my eyes.
OK, cool.
Do you also still maintain that if he thought it wasn’t a hard problem for people with the right qualifications, he wouldn’t be comfortable dismissing particular instances of AI researchers as mostly harmless?
Do you also still maintain that if he thought it wasn’t a hard problem for people with the right qualifications, he wouldn’t be comfortable dismissing particular instances of AI researchers as mostly harmless?
Yes. And again if Eliezer did consider the problem easy with qualifications but still dismissed the aforementioned folks as mostly harmless it would constitute dramatically enhanced boastful arrogance!
OK, that’s clear. I don’t know if I’ll bother to do the research to confirm one way or the other, but in either case your confidence that I’m misremembering has reduced my confidence in my recollection.
My apologies, it wasn’t my intention to do that. Careless oversight.
Yeah I remember that and it was certainly a megalomaniacal slip.
But I do not agree that arrogant is the correct term. I suspect “arrogant” may be a brief and inaccurate substitute for: “unappealing, but I cannot be bothered to come up with anything specific”. In my dictionaries (I checked Merriam-Webster and American Heritage), arrogant is necessarily overbearing. If you are clicking on their website, reading their literature, or attending their public function, there isn’t any easy way for them to overbear upon you.
When Terrell Owens does a touchdown dance in the end zone and the cameras are on him for fifteen seconds until the next play, your attention is under his thumb and he is being arrogant. Eliezer’s little slip of on-webcam megalomania is not arrogant. It would be arrogant if he were running for public office and said that in a debate and the voters felt they had to watch it, but not when the viewer has surfed to that information and getting away is free of any cost and as easy as a click.
Almost all of us do megalomaniacal stuff all the time when nobody is looking and almost all of us expend some deliberate effort trying to not do it when people are looking.
OK; I stand corrected about the controversiality.