I vaguely remember Eliezer saying that something is on topic at LessWrong if it improves the chances of a positive singularity (though a quick Google search couldn’t locate it). I would assume that this applies to meetups, and to metadiscussions of meetups. So I would appeal to everyone to put in place policies that would help the SIAI, whatever those may be. If minimizing “creepy” behavior maximizes the chance that the SIAI succeeds in its mission, then I want policies that minimize the incidence of creepy behavior, even if this is simply an excuse to exclude low-status people. If not minimizing “creepy” behavior maximizes the chance that the SIAI succeeds in its mission, then I don’t want policies that minimize creepy behavior, even if this results in unpleasant situations for some people.
The positive utility from FAI dominates any short-term local gains from {treating low-status males fairly, ensuring that females feel welcome}. I don’t know exactly how one would argue either way, but people aren’t even addressing the consequentialist point. My immediate naive idea would be to calculate whether ( ) is greater or less than ( ), but I have no idea how to even estimate some of those variables, nor am I sure that solving said inequality is the right frame to think in.
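The terms of the inequality were lost in formatting, but the general shape of the comparison being gestured at can be sketched as a generic expected-utility calculation. Every number and name below is a made-up placeholder, not an estimate of any actual quantity:

```python
# Hypothetical expected-utility comparison between two meetup policies.
# All probabilities and utilities are arbitrary placeholders; only the
# shape of the calculation matters, not the numbers.

def expected_utility(p_success: float, u_success: float,
                     u_failure: float = 0.0) -> float:
    """Expected utility of a policy, given P(mission succeeds | policy)."""
    return p_success * u_success + (1 - p_success) * u_failure

U_FAI = 1e9  # placeholder utility of a positive singularity

# Placeholder success probabilities under each candidate policy
eu_minimize_creepiness = expected_utility(p_success=0.011, u_success=U_FAI)
eu_ignore_creepiness = expected_utility(p_success=0.010, u_success=U_FAI)

print(eu_minimize_creepiness > eu_ignore_creepiness)
```

Under these invented numbers the first policy wins, but as the comment notes, the real difficulty is that nobody knows how to estimate the probabilities, and it is unclear whether the inequality is even the right frame.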
How might a lack of women affect the ability to compute CEV?