In addition to the problems with specific proposals below, many Oracular non-AI proposals are based on powerful metacomputation, e.g. Solomonoff induction or program evolution, and therefore incur the generic metacomputational hazards: they may accidentally perform morally bad computations (e.g. suffering sentient programs or human simulations), they may stumble upon and fail to sandbox an Unfriendly AI, or they may fall victim to ambient control by a superintelligence. Other unknown metacomputational hazards may also exist.
Every time I read something like this I think, “Wow, okay, from a superficial point of view this sounds like a logical possibility. But is it physically possible? If so, is it economically and otherwise feasible? What evidence do you have?”
You use math like “Solomonoff induction” as if it described part of the territory rather than being symbols and syntactic rules, scribbles on paper. To borrow your own terminology and heuristics: I think the Kolmogorov complexity of “stumble upon and fail to sandbox an Unfriendly AI” is extremely high.
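For concreteness, here are the standard definitions behind those two terms (this is the textbook formulation, not anything taken from the article itself). With $U$ a universal prefix Turing machine, $|p|$ the length in bits of a program $p$, and $x*$ denoting any output that begins with the string $x$:

$$K(x) = \min \{\, |p| : U(p) = x \,\}$$

$$M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}$$

$K(x)$ is the Kolmogorov complexity of $x$, and $M(x)$ is the Solomonoff prior that Solomonoff induction updates on. Neither quantity is computable, which is exactly my point: any system anyone could actually build has to substitute some resource-bounded approximation, and the hazards listed above would be properties of that approximation, not of the uncomputable ideal.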
I just noticed that even Ben Goertzel, who is apparently totally hooked on the possibility of superhuman intelligence, agrees with me...
… but please bear in mind that the relation of Solomonoff induction and “Universal AI” to real-world general intelligence of any kind is also rather wildly speculative… This stuff is beautiful math, but does it really have anything to do with real-world intelligence? These theories have little to say about human intelligence, and they’re not directly useful as foundations for building AGI systems (though, admittedly, a handful of scientists are working on “scaling them down” to make them realistic; so far this only works for very simple toy problems, and it’s hard to see how to extend the approach broadly to yield anything near human-level AGI). And it’s not clear they will be applicable to future superintelligent minds either, as these minds may be best conceived using radically different concepts.