Things that strain my credulity:
AI will be developed by a small team (at this time) in secret
That formal theory involving infinite or near-infinite computing power has anything to do with AI and computing in the real world. It might be vaguely useful for looking at computing in the limit (e.g. galaxy-sized computers), but otherwise it strains credulity.
I find this very unlikely as well, but Anna Salamon once put it as something like “9 Fields-Medalist types plus (an eventual) methodological revolution”, which made me raise my probability estimate from “negligible” to “very small”. Given the potential payoffs, I think that is enough to justify someone exploring the possibility seriously.
I have a suspicion that Eliezer isn’t privately as confident about this as he appears, and his apparent confidence is itself a PR strategy.
Turing’s theories involving infinite computing power contributed to building actual computers, right? I don’t see why such theories wouldn’t be useful stepping stones for building AIs as well. There’s a lot of work on making AIXI practical, for example (which could be disastrous if it succeeded, since AIXI wasn’t designed to be Friendly).
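As a hedged illustration of the direction that work takes (this is not any actual AIXI approximation; the environments, complexities, and rewards below are invented for the sketch), AIXI can be caricatured as a simplicity-weighted expectimax over candidate environment programs:

```python
# Toy illustration of the AIXI idea (the real AIXI is incomputable):
# weight a small hand-picked set of environment models by 2^-complexity
# and choose the action that maximizes the weighted expected reward.
# The environments and complexity values here are made up for illustration.

def reward(env, action):
    # Each candidate environment maps an action to a reward.
    return env["rewards"][action]

def choose_action(envs, actions):
    # Prior weight 2^-K for a model of (assumed) complexity K,
    # echoing the Solomonoff-style simplicity prior AIXI uses.
    total = sum(2.0 ** -e["complexity"] for e in envs)
    def expected(a):
        return sum((2.0 ** -e["complexity"] / total) * reward(e, a)
                   for e in envs)
    return max(actions, key=expected)

envs = [
    {"complexity": 2, "rewards": {"left": 1.0, "right": 0.0}},  # simple model
    {"complexity": 6, "rewards": {"left": 0.0, "right": 5.0}},  # complex model
]
print(choose_action(envs, ["left", "right"]))
```

The simple model dominates the mixture even though the complex one promises a bigger reward, which is the flavor of behavior a simplicity prior produces.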
If this is really something that a typical smart person finds hard to believe at first, it seems like it would be relatively easy to convince them otherwise.
The impression I have lingering from SL4 days is that he thinks it’s the only way to do AI safely.
Turing’s models generally assumed only infinite memory, rather than infinite processing power. The trouble with infinite processing power is that it doesn’t encourage you to ask which hypotheses should be processed; you just sweep that issue under the carpet and do them all.
I don’t see this as being much of an issue for getting usable AI working: it may be an issue if we demand perfect modeling of reality from a system, but there is no reason to suppose we need that.
As I see it, we can set up a probabilistic model of reality and extend this model in an exploratory way. We would continually measure the relevance of features of the model—how much effect they have on predicted values that are of interest—and we would tend to keep those parts of the model that have high relevance. If we “grow” the model out from the existing model that is known to have high relevance, we should expect it to be more likely that we will encounter further, high-relevance “regions”.
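A minimal sketch of that growth procedure, with an invented relevance measure (improvement in a predictive score) and an invented feature-adjacency graph standing in for a real model:

```python
# Sketch of "grow the model out from high-relevance regions":
# greedily add candidate features, keep one only if it measurably
# improves prediction, and propose new candidates adjacent to kept ones.
# The relevance measure and the feature graph are assumptions for the sketch.

def relevance(feature, kept, evaluate):
    # Relevance = improvement in predictive score from adding the feature.
    return evaluate(kept | {feature}) - evaluate(kept)

def grow_model(seed, neighbours, evaluate, threshold=0.01, steps=20):
    kept = set(seed)
    frontier = set()
    for f in kept:
        frontier |= neighbours(f)
    for _ in range(steps):
        frontier -= kept
        if not frontier:
            break
        best = max(frontier, key=lambda f: relevance(f, kept, evaluate))
        if relevance(best, kept, evaluate) < threshold:
            break  # nothing left with high relevance
        kept.add(best)
        frontier |= neighbours(best)  # explore near the high-relevance region
    return kept

# Demo with a made-up score: features 0..5 on a line, of which {0, 1, 2}
# actually affect predictions; growth from feature 0 stops at the boundary.
features = set(range(6))
truly_relevant = {0, 1, 2}
def evaluate(kept):
    # Stand-in predictive score: credit for covering truly relevant features.
    return 0.1 * len(kept & truly_relevant)
def neighbours(f):
    return {f - 1, f + 1} & features
print(grow_model({0}, neighbours, evaluate))
```

The point the sketch makes is the one in the comment above: exploration is steered by measured relevance rather than by enumerating every hypothesis.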
I feel we are going to get stuck in an AI bog here. Setting that aside, this seems to neglect linguistic information.
Let us say that you were interested in getting somewhere. You know you have a bike and a map, and you have cycled there many times.
What relevance does the fact that the word “car” refers to cars have to this model? None, directly.
Now if I were to tell you that “there is a car leaving at 2pm”, it would become relevant, assuming you trusted what I said.
A lot of real-world AI is not about collecting examples of basic input-output pairings.
AIXI deals with this by simulating humans and hoping that that is the smallest world.
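The bike/car example above can be sketched as a toy planner whose choice changes once a trusted utterance enters the model; the modes, times, and field names are made up for illustration:

```python
# Toy version of the bike/car example: a fact about the word "car" is
# irrelevant to the cycling model until a trusted utterance turns it into
# an option the planner can act on. All values here are invented.

def best_option(options):
    # Pick the option with the earliest arrival time (hours, 24h clock).
    return min(options, key=lambda o: o["arrives"])

options = [{"mode": "bike", "arrives": 16.0}]  # the model you already trust
print(best_option(options)["mode"])            # bike, for now

# "There is a car leaving at 2pm" -- once trusted, it enters the model.
options.append({"mode": "car", "arrives": 14.5})
print(best_option(options)["mode"])            # now car
```

The linguistic input never appears as an input-output training pair; it changes the decision by extending the set of options the model considers.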
I’m not sure why that strains your credulity. Note, for example, that computability results often tell us not to try something: the Turing halting theorem and related results mean that we know we can’t make a program that will, in general, tell whether an arbitrary program will crash.
Similarly, theorems about the asymptotic ability of certain algorithms matter. A strong version of P != NP would have direct implications for AIs trying to go FOOM. And if trapdoor functions or one-way functions exist, they give us possible security procedures for handling young general AIs.
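The halting result invoked above can be sketched as the classic diagonalization argument; `halts` is a hypothetical oracle (deliberately unimplemented, since no total, correct one can exist) and `troll` is the hypothetical program that defeats it:

```python
# Sketch of why a general halting tester can't exist. If `halts` worked
# for every program, the program `troll` defined from it would have to
# both halt and not halt on itself -- a contradiction.
# `halts` is a deliberately fake stand-in, not a real implementation.

def halts(program, arg):
    # Hypothetical oracle: would return True iff program(arg) halts.
    raise NotImplementedError("no such total, correct function can exist")

def troll(program):
    # Diagonalization: do the opposite of what the oracle predicts.
    if halts(program, program):
        while True:
            pass  # loop forever
    return "halted"

# Asking whether troll halts on itself is the contradictory case:
# halts(troll, troll) == True  implies troll(troll) loops forever;
# halts(troll, troll) == False implies troll(troll) halts.
```

This is the sense in which computability results are practical: they tell you in advance which tools cannot be built.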
I’m mainly talking about Solomonoff induction here, especially when Eliezer uses it as part of his argument about what we can expect from superintelligences, or searching through 3^^^3 proofs without blinking an eye.
The point in the linked post doesn’t deal substantially with the limits of arbitrarily large computers. It is just an intuition pump for the idea that a fast, moderately bright intelligence could be dangerous.
Is it a good intuition pump? To me it is like using a Turing machine as an intuition pump for how much memory we might have in the future. We will never have anywhere near infinite memory. We will have a lot more than we have at the moment, but the Turing machine is not useful for gauging the scope and magnitude.
I’m trying to find the other post that annoyed me in this fashion. Something to do with simulating universes.