Just a few thoughts for now:
I agree that some of our disagreements “come down to relatively deep worldview differences (related to the debate over ‘Pascal’s Mugging’).” The forthcoming post on this subject by Steven Kaas may be a good place to engage further on this matter.
I stand by my claim that Holden’s “objection #1 punts to objection #2.” For the moment, we seem to be talking past each other on this point. The reply Eliezer and I gave on Tool AI was not just that Tool AI has its own safety concerns, but also that understanding the Tool AI approach and other possible approaches to the AGI safety problem is part of what an “FAI programmer” does. We understand why people have gotten the impression that SI’s FAI team is specifically about building a “self-improving CEV-maximizing agent,” but that’s just one approach under consideration, and figuring out which approach is best requires the kind of expertise that SI aims to host.
The evidence suggesting that rationality is a weak predictor of success comes from studies on privileged Westerners. Perhaps Holden has a different notion of what counts as a measure of rationality than the ones currently used by psychologists?
I’ve looked further into donor-advised funds and now agree that the institutions named by Holden are unlikely to overrule their clients’ wishes.
I, too, would be curious to hear Holden’s response to Wei Dai’s question.
On the question of the impact of rationality, my guess is that:
Luke, Holden, and most psychologists agree that rationality means something roughly like the ability to make optimal decisions given evidence and goals.
The main strand of rationality research followed by both psychologists and LWers has been focused on fairly obvious cognitive biases. (For short, let’s call these “cognitive biases”.)
Cognitive biases cause people to make the choices that are most obviously irrational, but not the ones that are most importantly irrational. For example, it’s very clear that spinning a wheel should not affect people’s estimates of how many African countries are in the UN. But do you know anyone for whom this sort of thing is really their biggest problem?
Since cognitive biases are the primary focus of research into rationality, rationality tests mostly measure how good you are at avoiding them. These are the tests used in the studies psychologists have done on whether rationality predicts success.
LW readers tend to be fairly good at avoiding cognitive biases (and will be even better if CFAR takes off).
But there are a whole series of much more important irrationalities that LWers suffer from. (Let’s call them “practical biases” as opposed to “cognitive biases”, even though both are ultimately practical and cognitive.)
Holden is unusually good at avoiding these sorts of practical biases. (I’ve found Ray Dalio’s “Principles”, written by Holden’s former employer, an interesting document on practical biases, although it also has a lot of stuff I disagree with or find silly.)
Holden’s superiority at avoiding practical biases is a big part of why GiveWell has tended to be more successful than SIAI. (Givewell.org has around 30x the traffic of Singularity.org according to Compete.com, and my impression is that it moves several times as much money, although I can’t find a 2011 fundraising total for SIAI.)
lukeprog has been better at avoiding practical biases than previous SIAI leadership and this is a big part of why SIAI is improving. (See, e.g., lukeprog’s debate with EY about simply reading Nonprofit Kit for Dummies.)
Rationality, properly understood, is in fact a predictor of success. Perhaps if LWers used success as their metric (as opposed to skill at avoiding obvious mistakes), they would focus on their most important irrationalities instead of their most obvious ones, which would make them both more rational and more successful.
For the record, I basically agree with all this.