If you acknowledge the possibility of uFAI, then it makes even less sense to want to remove the only people whose aim is to prevent it. There is already an existing AGI research community that is not especially safety-oriented, and an AI research community that is not taking the risk seriously.
They could be dangerously deluded, for example, even if their aim is right. Currently I don't believe they are, but I gave an example of how one could come to the conclusion that SIAI has negative expected value.