Based on my reading of the post, it seemed that you were concerned primarily with info-hazard risks in ML research, not AI research in general; perhaps it was the way you framed it that made me take it to be contingent on ML mattering.
I meant it to be about all AI research. I don’t usually make too much effort to distinguish ML and AI, TBH.