I don’t see how this is responding to anything I’ve said? What in my comment are you disagreeing with or adding color to?
Again, my position is not “AI 2027 did something bad”. My position is “stop critiquing people for having goals around status and prestige rather than epistemics, or at least do so consistently”.
(Incidentally, I suspect bio anchors did better on the axis of getting good reviews / feedback, but that isn’t particularly central to anything I’m claiming.)
I was responding to this part:
“For example, titotal’s critique was posted on the EA Forum / LessWrong, and focused on technical disagreement”
And I was saying that this is also true for the early drafts of AI 2027. Only after a long discussion of the technical disagreements did it go on to a huge amplification thing. This seems directly relevant to that section.
I am responding to the part about consistent standards. I don’t really understand what you believe here; clearly you care a lot about people not using lots of rhetorical tricks and adversarial persuasion tactics all the time, and we’ve talked about that in the past, so I am just straightforwardly arguing that on those dimensions titotal’s post was much worse than AI 2027.
We don’t need to come to agreement on this part; it does seem kind of hard to evaluate. But inasmuch as your top-level comment is arguing that some kind of asymmetric standard is being applied, that just seems super wrong to me. I don’t know where I would put the line of encourage/discourage, but I don’t see any inconsistency in being unhappy with what titotal is doing and happy about what AI 2027 is doing.
“I don’t see any inconsistency in being unhappy with what titotal is doing and happy about what AI 2027 is doing.”
I agree with this. I was responding pretty specifically to Zvi’s critique in particular, which focuses on things like the use of the word “bad” and the notion that there could be a goal to lower the status and prestige of AI 2027. If instead the critique were about e.g. norms of intellectual discourse, I’d be on board.
That said, your defense doesn’t feel all that strong to me? I’m happy to take your word for it that there was lots of review of AI 2027, but my understanding is that titotal also engaged quite a lot with the authors of AI 2027 before publishing the post? (I definitely expect the engagement / review was much lower in an absolute sense, but then everything about it is going to be lower in an absolute sense, since it is not as big a project.)
If I had to guess at the difference between us, it would be that I primarily see emotionally gripping storytelling as a symmetric weapon to be regarded with suspicion by default, whereas you primarily view it as an important and valuable way to get people to really engage with a topic. (Though admittedly on this view I can’t quite see why you’d object to describing a model as “bad”, since that also seems like a way to get people to better engage with a topic.) Or possibly it’s more salient to me how the storytelling in the finished AI 2027 product comes across since I wasn’t involved in its creation, whereas to you the research and analysis is more salient.
Anyway, it doesn’t seem super worth digging to the bottom of this; seems reasonable to leave it here (though I would be interested in any reactions you have if you felt like writing them).
EDIT: Actually, looking at the other comments here, I think it’s plausible that a lot of the difference is in the creators thinking the point of AI 2027 was the scenario, whereas the public reception was much more about the timelines. I feel like it was very predictable that public reception would focus a lot on the timeline, but perhaps this would have been less clear in advance. Though looking at Scott’s post, the timeline is really quite central to the presentation, so I don’t feel like this can really be a surprise.