Ben, you could be right that my “world is too fuzzy” view is just mind projection, but let me at least explain what I am projecting. The most natural way to get “unlimited” control over matter is a pure reductionist program in which a formal mathematical logic represents designs and causal relationships with perfect accuracy (perfect to the limits of quantum probabilities). Unfortunately, combinatorial explosion makes that impractical. What we can actually do instead is redescribe collections of matter in new terms. Sometimes these redescriptions link neatly to the underlying physics and we get cool stuff like F = ma, but more often they are leaky-but-useful “concepts”. The fact that we have to leak accuracy (usually to the point where precise definitions themselves become basically impossible) to make dealing with the world tractable is what I mean by “the world is too fuzzy to support much intelligent manipulation”. In certain special cases we come up with clever ways to bound probabilities and produce technological wonders… but transhumanist fantasies usually leap to the assumption that everything we desire can be tamed this way. I think that is a wild leap. I realize most futurists see this as unwarranted pessimism, and that the default position is that anything imaginable that doesn’t provably violate the core laws of physics merely awaits something smart enough to build it.
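To make the combinatorial-explosion point concrete, here is a toy sketch (my own illustrative numbers, not anything from the discussion above): an exact reductionist description has to track every joint configuration of a system, and that count grows exponentially with the number of components.

```python
def exact_state_count(n_components: int, states_each: int = 2) -> int:
    """Number of joint configurations of n components, each with a
    fixed number of states. Grows exponentially in n_components."""
    return states_each ** n_components

# A toy system of just 300 two-state components already has more joint
# configurations than the ~10^80 atoms in the observable universe,
# which is why exact bookkeeping gives way to leaky redescriptions.
print(exact_state_count(300) > 10 ** 80)
```

The numbers here are only for illustration, but the scaling itself is the point: no added hardware closes an exponential gap, so tractability has to come from redescription rather than exhaustive accuracy.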
My other reasons for doubting the ultimate capabilities of RSI probably don’t need more explanation. My skepticism about the imminence of RSI as a threat (never mind the ultimate ability of RSI itself) rests mostly on two ideas: 1) the world is really damn complicated, and it will take a really damn complicated computer to make sense of it (the vast human data-sorting machinery is well beyond Roadrunner, and it is not that capable anyway); and 2) there is still not the beginning of a credible theory of how to make sense of a really damn complicated world with software.
I agree it is “very dangerous” to put a low probability on any particular threat being an imminent concern. Many such threats exist and we make this very dangerous tentative conclusion every day… from cancer in our own bodies to bioterror to the possibility that our universe is a simulation designed to measure how long it takes us to find the mass of the Higgs, after which we will be shut off.
That is all just an aside, though, to my main point: if I’m wrong, the only reasonable option I can see is an explicit program to take over the world with a Friendly AI.
I approve of such an effort. If my skepticism is correct, it will be impossible for decades at least; if I’m wrong, I’d rather have an RSI that at least tried to be Friendly. It does seem that the Friendliness part matters more than the RSI part at the start of such an effort.