I think my system 1 was wrong about the MtG post. I mostly think the negative effects of posting ideas (related to technical topics) that people think are bad are small enough to ignore, except insofar as they mess with my internal state. My system 2 thinks my system 1 is wrong about the external effects, but intends to cooperate with it anyway, because not cooperating with it could be internally bad.
As another example, months ago, you asked me to talk about how embedded agency fits in with the rest of AI safety, and I said something like: I didn't want to force myself to make any public arguments for or against the usefulness of agent foundations. This is because I think research prioritization is especially prone to rationalization, so it is important to me that my thoughts about research prioritization are not pressured by downstream effects on what I am allowed to work on. (They can still change what I decide to work on, but only through channels that are entirely internal.)
I enjoyed the MtG post, by the way. It was brief and well illustrated. I haven't seen other posts before that cover that many AI safety approaches at that level. (Organizing the approaches, as opposed to just focusing on one thing and all its details.)