I agree that selection bias is a problem. I plan to discuss and write about AI alignment more in the future. Also note that Eliezer and Nate think the problem is pretty hard and unlikely to be solved.
You didn’t respond to my point that defending against this type of technology does seem to require solving hard philosophical problems. What are your thoughts on this?
Automation technology (in an adversarial context) is kind of like a very big gun. It projects a lot of force. It can destroy lots of things if you point it wrong. It might be hard to point at the right target. And you might kill or incapacitate yourself if you do something wrong. But it’s inherently stupid, and has no agency by itself. You don’t have to solve philosophy to deal with large guns; you just have to do some combination of (a) figure out how to wield them to do good, (b) get people to stop using them, (c) find strategies for fighting against them, or (d) defend against them. (Certainly, some of these things involve philosophy, but they don’t necessarily require fully formalizing anything.)
The threat is different in kind from that of a fully-automated autopoietic cognitive system, which is more like a big gun possessed by an alien soul.
> You don’t have to solve philosophy to deal with large guns; you just have to do some combination of (a) figure out how to wield them to do good, (b) get people to stop using them, (c) find strategies for fighting against them, or (d) defend against them.
Do you have ideas for how to do these things, for the specific “big gun” that I described earlier?
> The threat is different in kind from that of a fully-automated autopoietic cognitive system, which is more like a big gun possessed by an alien soul.
If the big gun is being wielded by humans whose values and thought processes have been corrupted (by others using that big gun, or in some other way, such as being indoctrinated with bad ideas from birth), that doesn’t seem very different from a big gun possessed by an alien soul.
> Do you have ideas for how to do these things, for the specific “big gun” that I described earlier?
Roughly, minimize direct contact with things that cause insanity, be the sanest people around, and as a result be generally more competent than the rest of the world at doing real things. At some point use this capacity to oppose things that cause insanity. I haven’t totally worked this out.
> If the big gun is being wielded by humans whose values and thought processes have been corrupted (by others using that big gun, or in some other way, such as being indoctrinated with bad ideas from birth), that doesn’t seem very different from a big gun possessed by an alien soul.
It’s hard to corrupt human values without corrupting other forms of human sanity, such as epistemics and general ability to do things.