If you dislike the idea of using the term “Schelling point” in this way, perhaps you could suggest a concise way of saying “choice that would naturally occur to people” to be used outside of specific game theory problems?
‘Low entropy’ is something I would very naturally use. Of course, this is also a technical term. :) I do have a precise meaning for it in my head: “Learning that the chosen group is the set of non-peace Nobel laureates does not give you much more information, in the sense of conditional entropy, given that you already know the group was to be chosen by humans for the purpose of CEV.” Now that I think about it, though, that is quite inferentially far from “Non-peace Nobel laureates would be a low entropy group.” In the context of LW, perhaps a level of detail somewhere between those two would avoid ambiguity.
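To pin down that sense of conditional entropy, here is a minimal sketch of computing H(X|Y) from a joint distribution. The group names and all the probabilities are made up purely for illustration; the point is just that H(X|Y) is small when knowing Y (the selection context) already mostly determines X (the chosen group).

```python
import math

def conditional_entropy(joint):
    """H(X|Y) in bits, given joint probabilities joint[(x, y)]."""
    # Marginal distribution of Y.
    p_y = {}
    for (x, y), p in joint.items():
        p_y[y] = p_y.get(y, 0.0) + p
    # H(X|Y) = -sum_{x,y} p(x,y) * log2( p(x,y) / p(y) )
    h = 0.0
    for (x, y), p in joint.items():
        if p > 0:
            h -= p * math.log2(p / p_y[y])
    return h

# Hypothetical joint distribution over (chosen group, selection context).
joint = {
    ("laureates", "cev"):    0.4,
    ("experts",   "cev"):    0.1,
    ("laureates", "random"): 0.1,
    ("experts",   "random"): 0.4,
}
print(conditional_entropy(joint))  # ~0.722 bits, versus 1 bit unconditionally
```

With these (invented) numbers, conditioning on the context cuts the remaining uncertainty about the group from 1 bit to about 0.72 bits; a genuinely “low entropy” choice would drive that residual figure close to zero.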
Whether low entropy would be desirable in this context depends on what you are trying to achieve. In its favour, as you mentioned, it would make the choice easier to justify to others, if that is a concern. Apart from that, I would expect the right solution to be a simple one, but looking for simple solutions is not the best way to find it. Low entropy provides a bit of evidence for optimality, but you already have criteria that you want to maximize; it is better to analyze those criteria directly than to use a not-especially-good proxy for them, at least until the analysis hits diminishing returns. Also, since you’re human, looking at candidate solutions can make your brain argue for or against them rather than build a deeper understanding, and that tends not to end well. Since you seem to be doing this to gather support by avoiding the feeling that “this is the arbitrary whim of the AI designers”, none of this is really relevant to the point you were trying to make; but since I misinterpreted you initially, we get a bit more CEV analysis.
Okay, this definitely clears things up.