CEV is a bizarre wishlist, apparently made with minimal consideration of implementation difficulties …
It is what the software professionals would call a preliminary requirements document. You are not supposed to worry about implementation difficulties at that stage of the process. Harsh reality will get its chance to force compromises later.
I think CEV is one proposal to consider, useful to focus discussion. I hate it, myself, and suspect that the majority of mankind would agree. I don’t want some machine that I have never met and don’t trust to be inferring my volition and acting on my behalf. The whole concept makes me want to go out and join some Luddite organization dedicated to making sure neither UFAI nor FAI ever happen. But, seen as an attempt to stimulate discussion, I think that the paper is great. And maybe discussion might improve the proposal enough to alleviate my concerns. Or discussion might show me that my concerns are baseless.
I sure hope EY isn’t deluded enough to think that initiatives like LW can be scaled up far enough to improve the analytic capabilities of a sufficiently large fraction of mankind that proposals like CEV will not encounter significant opposition.
It is what the software professionals would call a preliminary requirements document. You are not supposed to worry about implementation difficulties at that stage of the process. Harsh reality will get its chance to force compromises later.
What—not at all? You want the moon-onna-stick—so that goes into your “preliminary requirements” document?
Yes. Because there is always the possibility that some smart geek will say “‘moon-onna-stick’, huh? I bet I could do that. I see a clever trick.” Or maybe some other geek will say “Would you settle for Sputnik-on-a-stick?” and the User will say “Well, yes. Actually, that would be even better.”
At least that is what they preach in the Process books.
It sounds pretty surreal to me. I would usually favour some reality-imposed limits to fantasizing and wishful thinking from the beginning—unless there are practically no time constraints at all.
I sure hope EY isn’t deluded enough to think that initiatives like LW can be scaled up far enough to improve the analytic capabilities of a sufficiently large fraction of mankind that proposals like CEV will not encounter significant opposition.
If there were ever any real chance of success, governments would be likely to object. Since they already have power, they are not going to want a bunch of geeks in a basement taking over the world with their intelligent machine—and redistributing all their assets for them.
I think CEV is one proposal to consider, useful to focus discussion. I hate it, myself, and suspect that the majority of mankind would agree. I don’t want some machine that I have never met and don’t trust to be inferring my volition and acting on my behalf. The whole concept makes me want to go out and join some Luddite organization dedicated to making sure neither UFAI nor FAI ever happen. But, seen as an attempt to stimulate discussion, I think that the paper is great. And maybe discussion might improve the proposal enough to alleviate my concerns. Or discussion might show me that my concerns are baseless.
That seems unlikely to help. Luddites have never had any power. Becoming a Luddite usually just makes you more xxxxxd.