...throughout the sequences EY frequently presents himself as possessing the intellectual horsepower and insight to transform the world in “impossible” ways...
I cannot think of one example of a claim along those lines.
The closest I can think of right now is a quote from Eliezer’s January 2010 video Q&A.
You quoted the context of my statement but edited out the part my reply was based on. Don’t do that.
and holding back from doing so only because he possesses the unusual wisdom to realize that doing so is immoral.
The very quote of Eliezer that you supply in the parent demonstrates that Eliezer presents himself as actually trying to do those “impossible” transformations, not refraining from doing them for moral reasons. That part just comes totally out of left field, and since it is presented as a conjunction, the whole thing just ends up false.
Thanks for clarifying what part of my statement you were objecting to.
Mostly what I was thinking of on that side was the idea that actually building a powerful AI, or even taking tangible steps that make the problem of building a powerful AI easier, would result in the destruction of the world (or, at best, the creation of various “failed utopias”), and that the moral thing to do (which most AI researchers, to say nothing of lesser mortals, aren’t wise enough to realize is absolutely critical) is therefore to hold off on that stuff and instead work on moral philosophy and decision theory.
I recall a long wave of exchanges of the form “Show us some code!” “You know, I could show you code… it’s not that hard a problem, really, for one with the proper level of vampiric aura, once the one understands the powerful simplicity of the Bayes-structure of the entire universe and finds something to protect important enough to motivate the one to shut up and do the impossible. But it would be immoral for me to write AI code right now, because we haven’t made enough progress in philosophy and decision theory to do it safely.”
But looking at your clarification, I will admit I got sloppy in my formulation, given that that’s only one example (albeit a pervasive one). What I should have said was “throughout the sequences EY frequently presents himself as possessing the intellectual horsepower and insight to transform the world in ‘impossible’ ways, one obvious tangible expression of which (that is, an actual AI design) he holds back from creating only because he possesses the unusual wisdom to realize that doing so is immoral.”
“You know, I could show you code… it’s not that hard a problem, really,
I’d actually be very surprised if Eliezer had ever said that, since it is plainly wrong and, as far as I know, Eliezer isn’t quite that insane. I can imagine him saying that it is (probably) an order of magnitude easier than making the coded AI friendly, but that still just places it lower on a scale of ‘impossible’. Eliezer says many things that qualify for the label ‘arrogant’, but I doubt this is one of them.
If Eliezer thought AI wasn’t a hard problem, he wouldn’t be comfortable dismissing (particular instances of) AI researchers who don’t care about friendliness as “Mostly Harmless”!
What I wrote was “it’s not that hard a problem, really, for one with (list of qualifications most people don’t have),” which is importantly different from what you quote.
Incidentally, I didn’t claim it was arrogant. I claimed it was a boast, and I brought boasts up in the context of judging whether someone is a crackpot. I explicitly said, and I repeat here, that I don’t really have an opinion about EY’s supposed arrogance. Neither do I think it especially important.
What I wrote was “it’s not that hard a problem, really, for one with (list of qualifications most people don’t have),” which is importantly different from what you quote.
I extend my denial to the full list. I do not believe Eliezer has made the claim that you allege he has made, even with the list of qualifications. It would be a plainly wrong claim and I believe you have made a mistake in your recollection.
The flip side is that if Eliezer has actually claimed that it isn’t a hard problem (with the list of qualifications), then I assert that said claim significantly undermines Eliezer’s credibility in my eyes.
OK, cool.
Do you also still maintain that if he thought it wasn’t a hard problem for people with the right qualifications, he wouldn’t be comfortable dismissing particular instances of AI researchers as mostly harmless?
Do you also still maintain that if he thought it wasn’t a hard problem for people with the right qualifications, he wouldn’t be comfortable dismissing particular instances of AI researchers as mostly harmless?
Yes. And again, if Eliezer did consider the problem easy with qualifications but still dismissed the aforementioned folks as mostly harmless, it would constitute dramatically enhanced boastful arrogance!
OK, that’s clear. I don’t know if I’ll bother to do the research to confirm one way or the other, but in either case your confidence that I’m misremembering has reduced my confidence in my recollection.
My apologies, it wasn’t my intention to edit out the part your reply was based on. Careless oversight.