Does the fact that it would be a candidate for a Schelling point in a coordination-game-ified version of this problem constitute a reason that choosing it would be desirable?
Yes, in the sense that if it is a Schelling point then it seems less arbitrary than a group that hardly anyone would think of suggesting. It may be the case that “group X” is a more ideal group of people to participate in a selective CEV than Nobel laureates—but to the vast majority of people, this will seem like a totally arbitrary choice, and proponents are therefore likely to get bogged down justifying it.
If you dislike the idea of using the term “Schelling point” in this way, perhaps you could suggest a concise way of saying “choice that would naturally occur to people” to be used outside of specific game theory problems?
I do recognise your objection and will try to avoid using it in this sense in future.
If you dislike the idea of using the term “Schelling point” in this way, perhaps you could suggest a concise way of saying “choice that would naturally occur to people” to be used outside of specific game theory problems?
‘Low entropy’ is something I would very naturally use. Of course, this is also a technical term. :) I do have a precise meaning for it in my head—“Learning that the chosen group is the set of non-peace Nobel laureates does not give you much more information, in the sense of conditional entropy, given that you already know the group was to be chosen by humans for the purpose of CEV.”—but now that I think about it, that is quite inferentially far from “Non-peace Nobel laureates would be a low-entropy group.” In the context of LW, perhaps a level of detail somewhere between those two would avoid ambiguity.
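As a toy illustration of that conditional-entropy sense: treat “which group gets proposed” as a random variable, conditional on knowing only that humans were to pick a group for CEV. A salient choice has high probability and therefore low surprisal (−log₂ p); an obscure “group X” carries far more information. The distribution below is entirely invented for illustration.

```python
import math

# Hypothetical distribution over groups people might propose for a
# selective CEV, given only that a group was to be chosen by humans.
# These probabilities are made up purely for illustration.
proposals = {
    "non-peace Nobel laureates": 0.30,
    "elected world leaders": 0.25,
    "random global sample": 0.25,
    "group X (obscure)": 0.01,
    "other": 0.19,
}

def surprisal_bits(p):
    """Information gained, in bits, on learning an outcome of probability p."""
    return -math.log2(p)

# A salient (Schelling-point-like) choice adds little information
# beyond what you already knew; an obscure one adds a lot.
print(round(surprisal_bits(proposals["non-peace Nobel laureates"]), 2))  # ≈ 1.74 bits
print(round(surprisal_bits(proposals["group X (obscure)"]), 2))          # ≈ 6.64 bits
```

In this sense, “low entropy” just quantifies “a choice that would naturally occur to people”: learning that it was picked barely surprises you.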
Whether low entropy would be desirable in this context depends on what you are trying to achieve. In its favour, a low-entropy choice would be easier to justify to others, as you mentioned, if that is a concern. Apart from that, I would expect the right solution to be a simple one, but looking for simple solutions is not the best way to go about finding it. Low entropy provides a bit of evidence for optimality, but you already have criteria that you want to maximize; it is better to analyze those criteria directly than to use a not-especially-good proxy for them, at least until you have hit diminishing returns on the analysis. Also, since you’re human, looking at candidate solutions can make your brain argue for or against them rather than build a deeper understanding; that tends not to end well.

Since you seem to be looking at this for the purpose of gathering support, by avoiding the feeling that the choice is the arbitrary whim of the AI designers, none of this is really relevant to the point you were trying to make; but since I misinterpreted you initially, we get a bit more CEV analysis.
Okay, this definitely clears things up.