The confidence I expressed linguistically was to avoid making the article boring. It shouldn’t matter to you how confident I am anyway. Take the ideas and come up with your own probabilities.
The key point, as far as I’m concerned, is that an AI built by a large corporation for a large computational grid doesn’t have this easy FOOM path open to it: Stupidly add orders of magnitude of resources; get smart; THEN redesign self. So the size of the entity that builds the first AI is a crucial variable in thinking about foom scenarios.
I consider it very possible that the probability distribution of dollars-that-will-be-spent-to-build-the-first-AI follows a power law, and hence is dominated by large corporations, so that scenarios involving them should carry more weight in your estimates than scenarios involving lone-wolf hackers, no matter how many of those hackers there are.
Scaling a program up is difficult. So is acquiring more servers. So is optimizing a program to run on fewer resources. Fooming is hard no matter how you do it. Your argument hinges on resource-usage reduction being far more difficult than scaling.
I do think resource-usage reduction is far more difficult than scaling. The former requires radically new application-specific algorithms; the latter uses general solutions that Google is already familiar with. In fact, I’ll go out on a limb here and say I know (for Bayesian values of the word “know”) that resource-usage reduction is far more difficult than scaling. Scaling is routine and happens continually at every major website and web application. Reducing the order of complexity of an algorithm is a thing that happens every 10 years or so, and is considered publication-worthy (which scaling is not).
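The asymmetry claimed above can be made concrete with a toy sketch (my illustration, not from the original discussion): “scaling” means throwing more workers at the same algorithm, while “resource-usage reduction” means replacing the algorithm with one of a lower complexity class. The problem here (counting equal pairs in a list) and all function names are invented for the example.

```python
from collections import Counter
from multiprocessing.dummy import Pool  # thread pool, just for illustration

def count_equal_pairs_naive(xs):
    """O(n^2): compare every pair of elements."""
    n = len(xs)
    return sum(1 for i in range(n) for j in range(i + 1, n) if xs[i] == xs[j])

def count_equal_pairs_scaled(xs, workers=4):
    """Same O(n^2) algorithm, split across workers: routine 'scaling'.
    Total work is unchanged; it is merely spread out."""
    n = len(xs)
    def row(i):
        return sum(1 for j in range(i + 1, n) if xs[i] == xs[j])
    with Pool(workers) as pool:
        return sum(pool.map(row, range(n)))

def count_equal_pairs_fast(xs):
    """O(n): a genuinely different algorithm -- count occurrences, then
    compute pairs per value. This is the rarer, harder kind of improvement."""
    return sum(c * (c - 1) // 2 for c in Counter(xs).values())

data = [1, 2, 2, 3, 2]
assert count_equal_pairs_naive(data) == count_equal_pairs_scaled(data) == count_equal_pairs_fast(data)
```

The scaled version is a mechanical transformation of the naive one; the fast version required noticing a different mathematical structure in the problem, which is the kind of insight the comment above argues is rare and slow to arrive.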
My argument has larger consequences (greater FOOM delay) if this is true, but it doesn’t depend on it to imply some delay. The big AI has to scale itself down a very great deal simply to be as resource-efficient as the small AI. After doing so, it is then in exactly the same starting position as the small AI. So foom is delayed by however long it takes a big AI to scale itself down to a small AI.
But suppose that I accept it: The Google AI still brings about a foom earlier than it would have come otherwise.
Yes, foom at an earlier date. But a foom with more advance warning, at least to someone.
A large AI seems more capable of finding a small AI (it has some first-hand AI insights, lots of computing power, and a team of Google researchers on its side) than an independent team of humans does.
No; the large AI is the first AI built, and is therefore roughly as smart as a human, whether it is big or small.