Are you saying that you don’t buy the scary idea?

I said I considered destroying “civilization” to be unlikely. Going by this:
progressing toward advanced AGI without a design for “provably non-dangerous AGI” (or something closely analogous, often called “Friendly AI” in SIAI lingo) is highly likely to lead to an involuntary end for the human race.
...the scary idea claims to be about “the human race”. I don’t define “civilization” in a human-centric way—so I don’t class those as being the same thing—for instance, I think that civilization might well continue after an “involuntary” robot takeover.
Well, a civilization with humanity all dead is pretty much certainly not what we want. I don’t care if in the grand scheme of things, this isn’t a win/lose game. I think I have something like a utility function, and I want it maximized, period.
Back to my question: do you see any other path to building a future we want than the one I described?
Well, a civilization with humanity all dead is pretty much certainly not what we want.
Well, humans will live on via historical simulations, with pretty good probability. Humans won’t remain the dominant species, though. Those hoping for that have unrealistic expectations. Machines won’t remain human tools; they are likely to be in charge.
I think I have something like a utility function, and I want it maximized, period.
Sure, but it’s you and billions of other organisms—with often-conflicting futures in mind—and so most won’t have things their way.
do you see any other path to building a future we want than the one I described?
IIRC, your proposal put considerable emphasis on proof. We’ll prove what we can, but proof often lags far behind the leading edge of computer science. There are many other approaches to building mission-critical systems incrementally; I expect we will make more use of those.
Historical simulations: assuming they preserve identity, etc., why not…
Utility function: I know that my chances of maximizing my utility function are quite… slim, to say the least.
Path to the best future (for humanity): proofs do not lag so far behind right now. Modern type systems are pretty good, and we have proof assistants that make the “prove your whole program” approach quite feasible, though not cheap yet. Plus, the leading edge is generally the easiest to prove, because it tends to lie on solid mathematical ground. We don’t do proofs because they’re generally expensive, and because we use ancient technologies that leak lots of low-level details, which makes proofs much harder. (I program for a living.)
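To make that concrete, here is a minimal sketch of what “prove your whole program” looks like in a modern proof assistant, Lean 4. It is a toy illustration of mine, not anyone’s actual proposal; the names rev and revAcc are made up for the example. We write a small function, then get the machine to check a property of it:

```lean
-- Toy illustration: a verified program fragment in Lean 4.
-- We define list reversal with an accumulator, then machine-check
-- that reversal preserves length. `rev` and `revAcc` are made-up
-- names for this sketch.

def revAcc : List α → List α → List α
  | [],      acc => acc
  | x :: xs, acc => revAcc xs (x :: acc)

def rev (xs : List α) : List α :=
  revAcc xs []

-- Auxiliary lemma: the accumulator version adds the two lengths.
theorem revAcc_length (xs acc : List α) :
    (revAcc xs acc).length = xs.length + acc.length := by
  induction xs generalizing acc with
  | nil => simp [revAcc]
  | cons x xs ih =>
      simp only [revAcc, ih, List.length_cons]
      omega

-- The property we care about: reversing never drops elements.
theorem rev_length (xs : List α) : (rev xs).length = xs.length := by
  simp [rev, revAcc_length]
```

Even this toy shows the trade-off: the checker gives total certainty about the stated property, but every property costs hand-written lemmas, which is exactly the expense I was pointing at above.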
But I see at least the possibility of a slightly different path: still take precautions, just don’t prove the thing.
Oh, and I forgot: if we solve safety before capability, incrementally designing the AI by trial and error would be quite reasonable. A definite milestone will be harder to define in this case, though. I guess I’ll have to update a bit.