“Why X should care about Y” is a really ambiguous topic. I presume you’re not asking “why should everyone care about rationality and about EA”, but I can’t tell what information you’re seeking. For clarity, are you asking “how can we increase the overlap”, or “should we increase the overlap”, or something else?
I’m an agent who believes that rationality helps me identify and achieve goals. I care about rationality as a tool. I also prefer that more agents be happier, and EA seems to be one mechanism to pursue that goal. I don’t consider myself a constituent of either “community”, but rather a consumer of (and occasionally a contributor to) the thinking and philosophy of each.
This seems more like a comment than an answer to me. I think it's worth asking about how I framed the question and why (reason: I see a lot of overlap between these two movements, so it seems worth asking "why does this overlap exist?"), but that discussion should happen in a comment rather than in an answer.