This spin-off makes sense: the SIAI's goal is not improving human rationality. The SIAI's goal is to try to make sure that if a Singularity occurs, it is one that doesn't destroy humanity or change us into something completely counter to what we want.
This is not the same thing as improving human rationality. The vast majority of humans will do absolutely nothing connected to AI research. Improving their rationality is a great goal, and probably has a high pay-off, but it is not the goal of the SIAI. When people give money to the SIAI, they expect that money to go towards AI research and related issues, including the summits. Moreover, many people who are favorable to rational thinking don't necessarily see a singularity-type event as at all likely. Many even in the saner end of the internet (e.g. the atheist and skeptic movements) consider it one more fringe belief; associating it with careful rational thinking is more likely to bring down LW-style rationality's status than to raise the status of singularity beliefs.
From my own perspective, as someone who agrees with a lot of the rationality material, considers a fast hard takeoff of AI unlikely, but thinks it likely enough that someone should be paying attention to it, this seems like a good strategy.