(I realise everything I’m commenting seems like a nitpick and I do think that what you’ve written is interesting and useful, I just don’t have anything constructive to add on that side of things)
I don’t like litigating via quotes, but:
I haven’t said, and I don’t think, that the majority of markets and prediction sites get this wrong.
and
More than that, I think the balance should be better and more explicitly be addressed.
I read the bit I’ve emphasised as saying “prediction sites have got this balance wrong”, which contradicts your comment saying you think they have it right.
Still, excessive focus on making rules on the front end, especially for longer-term questions and ones where the contours are unclear, rather than explicitly being adaptive, is not universally helpful.
I think it’s really hard for this adaptive approach to work when more than a small group of like-minded people is involved in a forecast. (This is related to my final point):
1) When the wording of a question seems ambiguous, the intent should be an overriding reason to choose an interpretation. 2) When the wording of a question is clear, the intent shouldn’t change the resolution.
The problem for me (with this) is that what is “clear” to some people is not clear to others. To give one example of this, the language in this question was completely unambiguous to me (and its author), but another predictor found it unclear. (I don’t think this is a particularly good example, but it’s just the one which came to mind when I tried to think of a case where some people thought something was ambiguous and some people didn’t.)
re: “Get this wrong” versus “the balance should be better,” there are two different things being discussed. The first is defining individual questions via clear resolution criteria, which I think is done well, and the second is defining clear principles that provide context and inform what types of questions and resolution criteria are considered good form.
A question like “will Democrats pass H.R.2280 and receive 51 votes in the Senate” is very well defined, but super-narrow, and easily resolved “incorrectly” if the bill is incorporated into another bill, if an adapted bill proposed by a moderate Republican passes instead, if it passes via some other method, or if it passes but gets vetoed by Biden. But it isn’t an unclear question, and given the current way that Metaculus is run, it would probably be the best way of phrasing the question. Still, it’s a sub-par question, given the principles I mentioned. A better one would be “Will a bill such as H.R.2280 limiting or banning straw purchases of firearms be passed by the current Congress and enacted?” It’s much less well defined, but the boundaries are very different. It also uses “passed” and “enacted”, which have gray areas. At the same time, the failure modes are closer to the ones we care about near the boundary of the question. However, given the current system, this question is obviously worse: it’s harder to resolve, and it’s more likely to be ambiguous because, for example, a bill that does only some of what we care about might be passed.
Still, I agree that the boundaries here are tricky, and I’d love to think more about how to do this better.