David Dalrymple criticized FAI and is working directly on mind uploading today, so apparently he disagrees with both you and Nick Szabo.
Nick Szabo explicitly suggested working on computer security, so he seems to disagree with you about its utility. I disagree with both you and him about whether provably unhackable software is feasible.
Do you think I’ve satisfied your request for examples of substantive disagreements? (I’d rather not go into object-level arguments since that’s not what this post is about.)
What I mean is that I think most of the critics would agree that the approaches they see as far-fetched (and which you say they ‘disagree’ about) are still much more realizable than FAI.
Furthermore, the arguments are highly conditional on specific speculations that are taken to be true for the sake of the argument. For example, if I am to assume that an unfriendly AI would destroy the world but that this can be prevented with FAI, it follows that an AI of the kind that is actually designed and can be controlled can be built in time. The algorithms needed to make such an AI cull its search space to a manageable size are also highly relevant to tools for solving all sorts of technological problems, including the biomedical research needed for mind uploading. This line of argument by no means implies that I believe mind uploading to be likely.
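As an aside, to make the search-space point concrete: the generic trick is to discard whole branches that provably cannot beat the best complete solution found so far. A minimal branch-and-bound sketch on a toy knapsack-style problem (my own illustration; the item data and weight budget are hypothetical):

```python
# Toy branch-and-bound: choose a subset of (value, weight) items maximizing
# total value under a weight limit, culling branches that provably cannot
# beat the best complete solution found so far.
ITEMS = [(10, 5), (6, 4), (5, 3), (3, 2)]  # hypothetical (value, weight) data
LIMIT = 8                                  # hypothetical weight budget

def search(i, value, weight, best):
    if weight > LIMIT:              # infeasible branch: cull it outright
        return best
    if i == len(ITEMS):             # complete solution: score it
        return max(best, value)
    # Optimistic bound: pretend every remaining item still fits.
    if value + sum(v for v, _ in ITEMS[i:]) <= best:
        return best                 # the cull: cannot improve on best
    v, w = ITEMS[i]
    best = search(i + 1, value + v, weight + w, best)  # take item i
    return search(i + 1, value, weight, best)          # skip item i

print(search(0, 0, 0, 0))  # -> 15
```

The same kind of cull, scaled up, is what turns otherwise intractable design searches into manageable ones, whether the search is over AI plans or over drug candidates.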
Furthermore, ‘provably friendly’ implies the existence of much superior techniques for designing provably-anything software; proving the absence of, e.g., buffer overruns and SQL injections is a far more readily achievable task.
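To illustrate why that is the easier task: ruling out SQL injection does not even require a theorem prover, only an interface that keeps query text and data separate. A minimal sketch using Python’s standard sqlite3 module (my own example, not anything from the discussion above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Vulnerable pattern: interpolating data into the query text lets
# attacker-controlled input rewrite the query's structure.
print(conn.execute(
    f"SELECT role FROM users WHERE name = '{user_input}'"
).fetchall())  # -> [('admin',)] : the injection succeeded

# Safe-by-construction pattern: a parameterized query keeps data and
# query text separate, so no input can alter the SQL structure. Absence
# of injection is a structural property, not something to test for.
print(conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall())  # -> [] : the malicious string matched nothing
```

Absence of injection in the second query is guaranteed by construction; nothing remotely analogous exists yet for ‘friendliness’.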
It would be incredibly difficult to track all the cross-dependencies and rank future technologies in order of appearance (an exercise that may well have lower utility than just picking one technology and working on it). But you do not need to do that to see that one particularly spectacular solution, which in practice relies on everything else, including neurology, to figure out and formally specify what constitutes a human in such a way that a superintelligence would not come up with some really weird interpretation, is much further down the timeline than the other solutions.