I understand what you’re saying, and I’ve heard that answer before, repeatedly; and I don’t buy it.
Suppose we were arguing about the theory of evolution in the 19th century, and I said, “Look, this theory just doesn’t work, because our calculations indicate that selection doesn’t have the power necessary.” That was the state of things around the turn of the century, when genetic inheritance was assumed to be analog rather than discrete.
An acceptable answer would be to discover that genes were discrete things that an organism had just 2 copies of, and that one was often dominant, so that the equations did in fact show that selection had the necessary power.
An unacceptable answer would be to say, “What definition of evolution are you using? Evolution makes organisms evolve! If what you’re talking about doesn’t lead to more complex organisms, then it isn’t evolution.”
Just saying “Organisms become more complex over time” is not a theory of evolution. It’s more like an observation of evolution. A theory means you provide a mechanism and argue convincingly that it works. To get to a theory of CEV, you need to define what it’s supposed to accomplish, propose a mechanism, and show that the mechanism might accomplish the purpose.
You don’t have to get very far into this analysis to see why the answer you’ve given doesn’t, IMHO, work. I’ll try to post something later this afternoon on why.
I won’t get around to posting that today, but I’ll just add that I know the intent of CEV is to solve the problems I’m complaining about. I know there are bullet points in the CEV document that say “Renormalizing the dynamic,” “Caring about volition,” and “Avoid hijacking the destiny of humankind.”
But I also know that the CEV document says,
Since the output of the CEV is one of the major forces shaping the future, I’m still pondering the order-of-evaluation problem to prevent this from becoming an infinite recursion.
and
It may be hard to get CEV right—come up with an AI dynamic such that our volition, as defined, is what we intuitively want. The technical challenge may be too hard; the problems I’m still working out may be impossible or ill-defined. I don’t intend to trust any design until I see that it works, and only to the extent I see that it works. Intentions are not always realized.
I think there is what you could call an order-of-execution problem, I think there’s a problem with things being ill-defined, and I think the desired outcome is logically impossible. I could be wrong. But since Eliezer himself worries that this could be the case, I find it strange that Eliezer’s bulldogs are so sure that there are no such problems, and so quick to shoot down discussion of them.