I’m probably missing something important. Could someone please point it out?
That most people, historically, have been morons.
Basically the same question: Why are you limited to humans? Even supposing you could make a clean evolutionary cutoff (no one before Adam gets to vote), is possessing a particular set of DNA really an objective criterion for having a single vote on the fate of the universe?
There is no truly objective criterion for such decision-making, or at least none that you would consider fair or interesting in the least. The criterion is going to have to depend on human values, for the obvious reason that humans are the agents who get to decide what happens now (and yes, they could well decide that other agents get a vote too).
It’s not a matter of votes so much as veto power. CEV is the one where everybody, or at least their idealized version of themselves, gets a vote. In my plan, not everybody gets everything they want. The AI just says “I’ve thought it through, and this is how things are going to go,” then provides complete and truthful answers to any legitimate question you care to ask. Anything you don’t like about the plan, when investigated further, turns out to be either a misunderstanding on your part or a necessary consequence of some other feature that, once you think about it, is really more important.
Yes, most people historically have been morons. Are you saying that morons should have no rights, no opportunity for personal satisfaction or relevance to the larger world? Would you be happy with any AI that had an equivalent degree of contempt for lesser beings?
There’s no particular need to limit it to humans; it’s just that humans have the most complicated requirements. If you want to add a few more orders of magnitude to the processing time and set aside a few planets just to make sure that everything macrobiotic has its own little happy hunting ground, go ahead.
Your scheme requires that the morons can be convinced of the correctness of the AI’s view by argumentation. If your scheme requires all humans to be perfect reasoners, you should mention that up front.