I think he clearly had a narrative he wanted to spin, and he’s being very defensive here.
If I wanted to steelman his position, I would do so as follows (low-confidence and written fairly quickly):
I expect he believes his framing and that he feels fairly confident in it because most of the people he respects also adopt this framing.
Insofar as his own personal views make it into the article, I expect he believes that he’s engaging in a socially acceptable amount of editorializing. In fact, I expect he believes that editorializing the article in this way is more socially responsible than not, likely because he sees the role of journalism as something along the lines of “critiquing power”.
Further, whilst I expect he wouldn’t universally endorse “being socially acceptable among journalists” as guaranteeing that something is moral, he’d likely defend it as a strongly reliable heuristic, such that it would take pretty strong arguments to justify departing from it.
Whilst he likely endorses some degree of objectivity (in terms of getting the facts correct), I expect he also sees neutrality as overrated by old-school journalists. I expect he believes it limits the ability of journalists to steer the world towards positive outcomes. That is, he treats neutrality more as a consideration that can be overridden than as a rule.
Great post!
I really appreciate proposals that are both pragmatic and ambitious, and this post is both!
I guess the closest thing there is to a CEA for AI Safety is Kairos. However, they decided to focus explicitly on student groups[1].
SPAR isn’t limited to students, but it is very much in line with this by providing "research mentorship for early-career individuals in AI safety".