But I have not seen any kind of vision painted for how you avoid a bad future, for any length of time, that doesn’t involve some kind of process that is just… pretty godlike?
I’m mostly with you all the way up to and including this line. But I would also add: I have not seen a plausible vision painted for how you avoid a bad future, for any length of time, that does involve some kind of process that is just pretty godlike.
This is why I put myself in the “muddle through” camp. It’s not because I think doing so guarantees a good outcome; indeed I’d be hard-pressed even to say it makes it likely. It’s just that by trying to do more than that — to chart a path through territory that we can’t currently even see — we are likely to make the challenge even harder.
Consider someone in 1712 observing the first industrial steam engines, recognizing the revolutionary potential, and wanting to … make sure it goes well. Perhaps they can anticipate its use outside of coal mines — in mills and ships and trains. But there’s just no way they can envision all of the downstream consequences: electricity, radio and television, aircraft, computing, nuclear weapons, the Internet, Twitter, the effect Twitter will have on American democracy (which, by the way, doesn’t exist yet…), artificial intelligence, and so on. Any attempt someone made, at that time, to design, in detail, a path from the steam engine to a permanently good future would have been guaranteed at the very least to fail, and would probably have made things much worse to the extent it locally succeeded in doing anything drastic.
Our position is in many ways more challenging than theirs. We have to be humble about how far into the future we can see. I agree that an open society comes with great danger and it’s hard to see how that goes well in the face of rapid technological change. But so too is it hard to see how centralized power over the future leads to a good outcome, especially if the power centralization begins today, in an era when those who would by default possess that power seem to be … extraordinarily cruel and unenlightened. Just as you, rightly, cannot say if AIs who replace us would have any moral value, I also cannot say that an authoritarian future has any value. Indeed, I cannot even say that its value is not hugely negative.
What I can say, however, is that we have some clear problems directly in front of us, either occurring right now or definitely in sight, one of which is this very possibility of a centralized, authoritarian future, from which we would have no escape. I support muddling through only because I see no alternative.
Nod. I deliberately titled a section There is no safe “muddling through” without perfect safeguards (in an earlier draft I did just say “there is no safe muddling through”, and then was like “okay, that’s false, because it seems totally plausible to muddle through into figuring out longer-term safeguards”).
(And, in fact, I don’t have a plan for getting long-term safeguards that doesn’t look like some kind of muddling through, in some sense.)
I was just chatting with @1a3orn, and he brought up a point similar to the industrial revolution concern, and I totally agree.
Some background assumptions I have here:
you can’t reason your way all the way to “safely navigate the industrial revolution”, yeah. Some notable failures:
inventing communism
trying to invent the cotton gin to make slavery less bad, and accidentally producing way more slavery by inducing demand
environmentalism ending up banning nuclear power, which caused a lot of environmental damage
(there are positive examples too I think, but the existence of these negative examples should put the fear of god in you)
it’s still possible to do nonzero reasoning ahead. You can put constraints on what sorts of things could possibly make sense to do.
early industrial revolution: if you don’t see the first steam train and think “oh shit, everything is gonna change”, man, you are going to be pointed in completely the wrong direction
analogously: if you don’t look at the oncoming AI (as well as general economic trends), and think “man, All Possible Views About Humanity’s Future Are Wild”, you’re not pointed in the right direction at all
Part of the point of this post was to lay out: “here’s the rough class of thing that seems like it’s gonna happen by default. Seems like either we need to learn new facts, or we need a process with an extreme amount of power and wisdom, or we should expect some cluster of bad things to probably happen.”
During my chat with 1a3orn, I did notice:
Okay, if I’m trying to solve the ‘death by evolution’ problem (still assuming we get a nice smooth takeoff), an alternative plan to “build the machine god” is:
Send human uploads with some von Neumann probes to every star in the universe, immediately, before we leave The Dreamtime. And then there will probably at least be a lot of subjective experience-timeslices and chances for some of them to figure out how to make good things happen, with (maybe) something like a 10-year head start before Hollow Grabby AI comes after them.
I don’t actually believe in a nice slow takeoff or 10-year lead times before Hollow Grabby AI comes after them, but, if I did, that’d at least be a coherent plan.
The problems with that are:
a) it still leaves a lot of risk of costly war between the human diaspora and the Hollow Grabby AI
b) many of the humans across the universe are probably going to do horrible S-risky mindcrime.
So, I’m not very satisfied with that plan, but I mention it to help broaden the creative range of solutions from “build a CEV god” to include at least one other type of option.