I think I’ve read EY state somewhere that a superintelligence would need to perform very few, or even zero, experiments to discover many (most? all?) true things about our universe that we humans need painstaking effort and experiments for.
EY is probably wrong. While greater intelligence allows deeper analysis, which can sometimes extract the independent variables of a complex problem, or find the right action, from less data, there are limits. When there are thousands of variables and only finite, noisy data (as with most medical data), a superintelligence will very likely be almost as stymied as humans are*.
Of course, what a superintelligence could do is ask for the smallest number of experiments needed to distinguish the competing theories, and/or analyze far more data than any living human is capable of. It could also recalculate its priors, or flush them entirely. It could ultimately solve medical problems at a pace that humans cannot.
*Another way to look at it: imagine a "Sherlock Holmes" chain of reasoning. Now realize that for every branch in a story where Sherlock "deduces that this pipe tobacco combined with these footprints means...", there are thousands of other valid possibilities that also fit the data. Weak data leaves a very large number of possible world states consistent with it. A human may get "stuck" on the wrong branch, lacking the cognitive capacity to consider the others, while a superintelligence may be able to hold thousands of the possibilities in memory. Either way, neither knows which branch is correct.
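As a toy illustration of how weak evidence leaves many consistent world states (all the numbers and "clues" here are invented for the example):

```python
from itertools import product

# Toy "Sherlock" world: 5 binary facts define a world state, so 32 states total.
states = list(product([0, 1], repeat=5))

# A weak clue rules out only the states that contradict it:
# "facts 0 and 1 are not both true" is consistent with 3/4 of all states.
weak_clue = lambda s: not (s[0] and s[1])
# A strong clue pins down one fact exactly.
strong_clue = lambda s: s[2] == 1

consistent = [s for s in states if weak_clue(s)]
print(len(consistent))  # 24 of 32 states still fit the weak clue
print(len([s for s in consistent if strong_clue(s)]))  # 12 remain after the strong clue
```

With a handful of weak clues like this, dozens of branches remain live; the limited mind commits to one of them, while the larger mind can track all of them, but neither has the data to pick the true one.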
Where EY is correct: a superintelligence could then consider many possible experiments and find the ones with the highest expected information gain. A perfect experiment that yields clean bits halves the number of consistent world states with each bit gained. (Note that EY, again, is probably wrong in that there may often be no experiment that produces data that clean.)
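The "pick the experiment with the most information gain" idea can be made concrete with standard expected-information-gain arithmetic. This is a minimal sketch with invented numbers: four equally likely world states, one clean experiment and one noisy one; the clean experiment delivers exactly one bit and halves the hypothesis set, while the noisy one delivers far less.

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_information_gain(prior, likelihoods):
    """Expected entropy reduction over hypotheses from one experiment.

    prior: prior[i] = P(hypothesis i)
    likelihoods: likelihoods[i][o] = P(outcome o | hypothesis i)
    """
    n_outcomes = len(likelihoods[0])
    expected_posterior_entropy = 0.0
    for o in range(n_outcomes):
        # Marginal probability of seeing outcome o under the prior.
        p_o = sum(prior[i] * likelihoods[i][o] for i in range(len(prior)))
        if p_o == 0:
            continue
        # Bayesian posterior over hypotheses after seeing outcome o.
        posterior = [prior[i] * likelihoods[i][o] / p_o for i in range(len(prior))]
        expected_posterior_entropy += p_o * entropy(posterior)
    return entropy(prior) - expected_posterior_entropy

# Four equally likely world states: 2 bits of uncertainty.
prior = [0.25] * 4

# A "perfect" experiment: its outcome cleanly splits the states in half.
clean = [[1, 0], [1, 0], [0, 1], [0, 1]]
# A noisy experiment: outcomes only weakly correlate with the states.
noisy = [[0.6, 0.4], [0.6, 0.4], [0.4, 0.6], [0.4, 0.6]]

print(expected_information_gain(prior, clean))  # 1.0 bit: halves the set
print(expected_information_gain(prior, noisy))  # a small fraction of a bit
```

The noisy case is the one EY's picture glosses over: most real experiments (medical trials especially) look like `noisy`, not `clean`, so even an optimal experimenter needs many of them.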