‘if you record any prediction anywhere other than Metaculus (that doesn’t have similarly good tools for representing probability distributions), you’re a con artist’. Seems way too extreme.
No, I don’t mean that whether it’s recorded on Metaculus is what distinguishes a con artist from a non-con-artist. It’s mentioned in that tweet because if you’re going to bother doing it, you might as well go all the way and show a distribution.
But even if he just posted a confidence interval, on some site other than Metaculus, that would be a huge upgrade. Then anyone could add it to a spreadsheet of scorable forecasts and reconstruct the distribution without too much effort.
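To make the “reconstruct it without too much effort” point concrete, here is a minimal sketch (my own illustration, with made-up numbers, assuming the stated interval is symmetric and roughly normal) of how a full scorable distribution falls out of just two posted numbers:

```python
from statistics import NormalDist

# Hypothetical example: someone posts only "90% CI: 2030-2060" for some event.
# From just those two numbers we can reconstruct a full (normal) distribution
# and score it later -- no special platform required.
lo, hi = 2030, 2060                 # stated 90% confidence interval (assumed symmetric)
z90 = NormalDist().inv_cdf(0.95)    # half-width of a 90% interval, in standard deviations

mu = (lo + hi) / 2
sigma = (hi - lo) / (2 * z90)
dist = NormalDist(mu, sigma)

# Sanity check: the reconstructed distribution reproduces the stated interval.
print(round(dist.inv_cdf(0.05), 1), round(dist.inv_cdf(0.95), 1))  # 2030.0 2060.0
```

A normal is just one convenient choice; any two-parameter family pinned down by the stated interval would serve the same scoring purpose.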
‘if you record any prediction anywhere other than Metaculus (that doesn’t have similarly good tools for representing probability distributions), you’re a con artist’. Seems way too extreme.
No, that’s not what I’m saying. The main thing is that the predictions be scorable. But if someone is going to do it at all, then doing it on Metaculus just makes more sense: the administrative work is already taken care of, and there’s no risk of cherry-picking or omission.
Also, from another reply you gave:
Also, I think you said on Twitter that Eliezer’s a liar unless he generates some AI prediction that lets us easily falsify his views in the near future? Which seems to require that he have very narrow confidence intervals about very near-term events in AI.
I never used the term “liar”. What I think he’s doing wrong is more like what a pundit does, like the guy who “calls” recessions: a sort of epistemic conning. “Lying” is something different, at least to me.
More importantly, no, he wouldn’t need especially narrow distributions, and I don’t know why you think he would. His distribution would only have to be “narrower” if it were squashed up against the “Now” end of the chart. And if that is what Eliezer thinks, if he himself is saying it happens earlier than date X, then the graph simply looks a bit narrower and shifted to the left, and it reflects what he believes.
There’s nothing about how we score forecasters that requires him to have “very narrow” confidence intervals about very near-term events in AI in order to measure alpha. To help me understand, can you describe why you think this? Why wouldn’t alpha start being measurable with confidence intervals only slightly narrower than the community’s, centered closer to the actual outcome?
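Here is a toy illustration of that last point (my own, with invented numbers, using the log score as one standard proper scoring rule): a forecaster whose distribution is only slightly narrower than the community’s and slightly better-centered on the outcome already scores measurably better, with no “very narrow” interval required.

```python
import math
from statistics import NormalDist

outcome = 2042                        # hypothetical realized year
community  = NormalDist(2050, 10.0)   # community aggregate distribution
forecaster = NormalDist(2046, 8.0)    # slightly narrower, shifted toward the outcome

def log_score(dist, x):
    """Log of the predictive density at the outcome; higher is better."""
    return math.log(dist.pdf(x))

# The forecaster beats the community on this resolution -- that margin,
# accumulated over many questions, is measurable alpha.
print(log_score(forecaster, outcome) > log_score(community, outcome))  # True
```

Averaged over enough resolved questions, a consistent gap like this is exactly what distinguishes skill from noise; no single forecast needs to be dramatically narrow.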
EDIT a week later: I have decided that several of your misunderstandings should be considered strawmanning, and I’ve switched from upvoting some of your comments here to downvoting them.