My guess is that this is a misreading of Eliezer’s stance that you should not aim for CEV as your first goal with a superintelligence that you think is plausibly aligned. Quoting the wiki:
CEV is rather complicated and meta and hence not intended as something you’d do with the first AI you ever tried to build.
CEV seems much much more difficult than strawberry alignment and I have written it off as a potential option for a baby’s first try at constructing superintelligence.
To be clear, I also expect that strawberry alignment is too hard for these babies and we’ll just die. But things can always be even more difficult, and with targeting CEV on a first try, it sure would be.
There’s zero room, there is negative room, to give away to luxury targets like CEV. They’re not even going to be able to do strawberry alignment, and if by some miracle we were able to do strawberry alignment and so humanity survived, that miracle would not suffice to get CEV right on the first try.
Also: