Three other examples of convergence theories are Roko’s UIV, Hollerith’s Goal System Zero (GS0), and Omohundro’s “Basic AI Drives”. These also postulate a process of convergence through rational AI self-improvement. But they tend to be less optimistic than CEV, while at the same time somewhat more detailed in their characterization of the ethical endpoint.
I wouldn’t say that any of those three are “less optimistic” than CEV; GS0 and UIV are just competing normative proposals, and the AI Drives are what you get out of most self-improving goal systems by default, and can be overridden. (And CEV isn’t about optimism anyway — it’s a goal, not a prediction, and in that capacity, it’s actually fairly pessimistic, going by the variety of possible failures it tries to account for.)
I guess I am taking CEV to be defined by the process of convergence that produces it. And I see optimism in the claim that this process will produce a happy result. I will agree that the ‘optimism’ that I am talking about here is not some kind of naive, blind optimism.