My guess is there is no confusion about this, but to be clear, I didn’t intend to speak on behalf of the AI 2027 team. Indeed, it’s plausible to me that they disagree with it, though my honest belief in that case is that they are confused about the sources of their own beliefs, not that my statement is wrong. I.e., I said:
Ideally the research supplements would say something like that at the top, though it’s plausible that some of the AI Futures Project team relate to their epistemic process differently (though if they do, I think they are just kind of confused).