At a workshop following last year's Singularity Summit, every attendee expressed the wish that brain emulation would arrive before AGI. I get the definite impression that those wishes stem mainly from fears of hard takeoff, not from optimism about brain emulation per se.
Are you sure this wasn’t a worry at all due to the fact that even without hard take-off moderately smart unFriendly AI can do a lot of damage?
Well, the question prompting the discussion was whether a responsible AGI researcher should simply publish his or her results (and let us, for the sake of this dialog, count as a "result" an idea that took a long time to identify, even though it might not pan out) for any old AGI researcher to see, or whether he or she should take care to control the dissemination of those results as best he or she can, so that the rate of dissemination to responsible researchers is maximized relative to the rate of dissemination to irresponsible ones. If an unFriendly AI can do a lot of damage without hard take-off, well, I humbly suggest he or she should take pains to control dissemination.
But to answer your question, in case you are asking out of curiosity rather than to advance the discussion of "controlled dissemination": well, Eliezer certainly thinks hard take-off represents the majority of the negative expected utility, and if the other two attendees of the workshop with whom I have had long conversations felt differently, I more likely than not would have learned of that by now. (I, too, believe that hard take-off represents the majority of the negative expected utility, even when utility is defined the "popular" way rather than the rather outré way I define it.)
Yes, I was asking out of curiosity about the responses, not specifically in regard to the issue of controlled dissemination.