...I also do not use “reasoning about idealized superintelligent systems as the method” of my agent foundations research. Certainly there are examples of this in agent foundations, but it is not the majority. It is not the majority of what Garrabrant or Demski or Ngo or Wentworth or Turner do, as far as I know.
It sounds to me like you’re not really familiar with the breadth of agent foundations. Which is perfectly fine, because it’s not a cohesive field yet, nor is the existing work easily understandable. But I think you should aim for your statements to be more calibrated.