> since the forecast did end up as good propaganda if nothing else
Just responding to this local comment you made: I think it’s wrong to make “propaganda” to reach end Y, even if you think end Y is important. If you have real reasons for believing something will happen, you shouldn’t have to lie, exaggerate, or otherwise mislead your audience to make them believe it, too.
So I’m arguing that you shouldn’t have mixed feelings on the grounds of ~“it was valuable propaganda at least.” Again, I’m not claiming that AI 2027 “lied”—just replying to the quoted bit of reasoning.
I phrased that badly/compressed too much. The background feeling there was that my critique may be of an overly nitpicky type that no normal person would care about, but the act-of-critiquing was still an attack on the report if viewed through the lens of a social-status game, which may (on the margins) unfairly bias someone against the report.
Like, by analogy, imagine a math paper with a valid but hard-to-follow proof of some conjecture that, for whatever reason, gets tons of negative attention due to bad formatting. That attention may unfairly taint the core result by association, even though the proof is completely valid.