Why is his pessimistic (realistic?) take downvoted without counterargument? Hankx isn’t in negative karma, so I don’t think he is usually disruptive, and I think he is making this argument in good faith.
Well I didn’t really substantively defend my position with reasons, and heaping on all the extra adjectives didn’t help :P
I was trying to figure out how to strike through the unsupported adjectives, and now I can’t figure out how to un-retract the comment… bleh, what a mess.
While I still agree with all the adjectives, I’ll take them out to be less over the top. Here’s what the edit should say:
I’d argue this entire exercise is an indictment of Eliezer’s approach to Friendly AI. A formal, rigorous success of “Friendliness theory” arriving BEFORE the Singularity is astronomically improbable.
What a Friendly Singularity will actually look like is an AGI researcher or researchers forging ahead at insane risk to themselves and the rest of humanity, somehow managing the improbable task of not annihilating humanity through intensive but inherently faulty safety engineering, and only later arriving at a formal solution to Friendliness theory post-Singularity. It goes without saying that the odds are heavily against any such safety mechanisms succeeding, let alone against their ever even being attempted.
Suffice it to say, a world in which we are successfully prepared to implement Friendly AI is unimaginable at this point.
And just to give some indication of where I’m coming from: this conclusion follows pretty directly if you buy Eliezer’s arguments in the Sequences and elsewhere about locality and hard takeoff, combined with his arguments that FAI is much harder than AGI (see e.g. here).
Of course I have to wonder: is Eliezer holding out to try to “do the impossible” in some pipe-dream FAI scenario like the OP imagines, or does he agree with this argument but still think he’s somehow working in the best way possible to support this more realistic scenario when it comes up?
You can get strikethrough by putting double tildes on each side of the struck-through portion: ~~unsupported adjective~~ ⇒ unsupported adjective.