That’s why I said “one or two orders of magnitude”.
That’s not the part of your post I was criticizing. I was criticizing this:
And they will run it between midnight and 5AM Pacific time, when the load on those servers is smallest.
Which doesn’t seem to be a good model of how Google servers work.
Show me where I expressed a confidence level in that post.
Confidence in English can be expressed non-numerically. Here are a few sentences that seemed brazenly overconfident to me:
I know when the singularity will occur
(Sensationalized title.)
I can give you 2.3 bits of further information on when the Singularity will occur
(The number of significant digits you’re counting on your measure of transmitted information implies confidence that I don’t think you should possess.)
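As a sanity check on where “2.3 bits” could come from — a sketch assuming the figure derives from narrowing a 24-hour day down to the 5-hour midnight-to-5AM window:

```python
import math

# Hypothetical reconstruction of the "2.3 bits" figure:
# learning that an event falls in a 5-hour window out of a
# 24-hour day conveys log2(24/5) bits of information.
bits = math.log2(24 / 5)
print(round(bits, 2))  # → 2.26
```

Note that this rounds to 2.3 — and quoting that second significant digit is exactly the kind of implied precision I’m objecting to.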
So the first bootstrapping AI will be created at Google. It will be designed to use Google’s massive distributed server system. And they will run it between midnight and 5AM Pacific time, when the load on those servers is smallest.
(I understand that among Bayesians there is no certainty, and that a statement of fact should be taken as a statement of high confidence. I did not take this paragraph to express certainty; however, it surely seems to express higher confidence than your arguments merit.)
One variant of the “foom” argument is that software that is “about as intelligent as a human” and runs on a desktop can escape … If the software can’t grab many more computational resources than it was meant to run with, because those resources don’t exist, that means it has to foom on raw intelligence … A small AI needs to be written in a much more clever manner …
Did you even read my counter-argument?
It seems to me like an AI with all of Google’s servers available is likely to find the small-AI faster than a team of human researchers: it already has extraordinary computing power, and it’s likely to have insights that humans are incapable of.
I concede that a large-AI could foom slower than a small-AI, if decreasing resource usage is harder than resource acquisition. You haven’t supported this (rather bold) claim. Scaling a program up is difficult. So is acquiring more servers. So is optimizing a program to run on fewer resources. Fooming is hard no matter how you do it. Your argument hinges upon resource-usage reduction being far more difficult than scaling, which doesn’t seem obvious to me.
But suppose that I accept it: The Google AI still brings about a foom earlier than it would have come otherwise. A large-AI seems more capable of finding a small-AI (it has some first-hand AI insights, lots of computing power, and a team of Google researchers on its side) than an independent team of humans.
A more important implication is that this scenario decreases the possibility of FOOM
I don’t buy it. At best, it doesn’t foom as fast as a small-AI could. Even then, it still seems to drastically increase the probability of a foom.
The confidence I expressed linguistically was to avoid making the article boring. It shouldn’t matter to you how confident I am anyway. Take the ideas and come up with your own probabilities.
The key point, as far as I’m concerned, is that an AI built by a large corporation for a large computational grid doesn’t have this easy FOOM path open to it: stupidly add orders of magnitude of resources; get smart; THEN redesign self. So the size of the entity that builds the first AI is a crucial variable in thinking about foom scenarios.
I consider it very possible that the probability distribution of dollars-that-will-be-spent-to-build-the-first-AI has a power-law distribution, and hence is dominated by large corporations, so that scenarios involving them should carry more weight in your estimations than scenarios involving lone-wolf hackers, no matter how many of those hackers there are.
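To make the dominance claim concrete — a sketch assuming spending follows a Pareto distribution with tail exponent α (my α = 1.1 is an illustrative choice, not a figure from the post): the top fraction p of spenders then holds a share p^((α−1)/α) of all dollars, so the tail swamps everything else.

```python
# Sketch: how heavy a power-law tail is. For Pareto-distributed
# spending with tail exponent alpha (> 1), the top fraction p of
# spenders holds a share p**((alpha - 1) / alpha) of total dollars.
# alpha = 1.1 is an illustrative assumption, not a figure from the post.
def top_share(p: float, alpha: float) -> float:
    """Share of total spending held by the top fraction p of spenders."""
    return p ** ((alpha - 1) / alpha)

alpha = 1.1
for p in (0.10, 0.01, 0.001):
    print(f"top {p:.1%} of spenders hold {top_share(p, alpha):.0%} of dollars")
```

Under that assumption, the top 1% of spenders hold roughly two-thirds of the dollars — i.e., a handful of Google-sized budgets outweigh any number of lone hackers.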
Scaling a program up is difficult. So is acquiring more servers. So is optimizing a program to run on fewer resources. Fooming is hard no matter how you do it. Your argument hinges upon resource-usage reduction being far more difficult than scaling
I do think resource-usage reduction is far more difficult than scaling. The former requires radically new application-specific algorithms; the latter uses general solutions that Google is already familiar with. In fact, I’ll go out on a limb here and say I know (for Bayesian values of the word “know”) that resource-usage reduction is far more difficult than scaling. Scaling is routine and goes on continually at every major website and web application. Reducing the order of complexity of an algorithm happens every 10 years or so, and is considered publication-worthy (which scaling is not).
My argument has larger consequences (greater FOOM delay) if this is true, but it doesn’t depend on it to imply some delay. The big AI has to scale itself down a very great deal simply to be as resource-efficient as the small AI. After doing so, it is then in exactly the same starting position as the small AI. So foom is delayed by however long it takes a big AI to scale itself down to a small AI.
But suppose that I accept it: The Google AI still brings about a foom earlier than it would have come otherwise.
Yes, foom at an earlier date. But a foom with more advance warning, at least to someone.
A large-AI seems more capable of finding a small-AI (it has some first-hand AI insights, lots of computing power, and a team of Google researchers on its side) than an independent team of humans.
No; the large AI is the first AI built, and is therefore roughly as smart as a human, whether it is big or small.