>Suppose it were the case that some cases of Seasonal Affective Disorder proved resistant to sitting in front of a 10,000-lux lightbox for 30 minutes (the standard treatment), but would nonetheless respond if you bought 130 or so 60-watt-equivalent high-CRI LED bulbs, in a mix of 5000K and 2700K color temperatures, and strung them up over your two-bedroom apartment.
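(As a rough sanity check on the numbers in that quote, here is a back-of-envelope sketch. The ~800 lumens per bulb and ~70 m² of floor area are my own illustrative assumptions, not figures from the original:)

```python
# Rough illuminance estimate for the "130 bulbs" scenario in the quote.
# Assumed (not from the original): ~800 lumens per 60-watt-equivalent LED,
# ~70 m^2 of floor area for a two-bedroom apartment, and the idealized
# simplification that all emitted light lands evenly on the floor plane.
bulbs = 130
lumens_per_bulb = 800        # typical 60 W-equivalent LED output (assumed)
floor_area_m2 = 70.0         # assumed two-bedroom apartment

total_lumens = bulbs * lumens_per_bulb
average_lux = total_lumens / floor_area_m2   # lux = lumens per m^2

print(f"total output: {total_lumens} lm")              # 104000 lm
print(f"average illuminance: {average_lux:.0f} lux")   # ~1486 lux
# Far below the 10,000-lux lightbox, but sustained all day rather than
# for a single 30-minute session.
```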
This is hindsight bias. Eliezer gives this example because it’s an example which happened to work.
But the relevant question is not “would immodesty, in this cherry-picked case, produce the right result”, but “would immodesty, when applied to many cases whose truth value you don’t know about in advance, produce the right result”. The procedure that has the greatest chance of working overall might fail in this particular case.
There are all sorts of things which can help you in a cherry-picked case subject to hindsight bias and availability bias, which are bad overall. There are automobile accidents where people were saved by not having seatbelts, but it would be dumb to point to one of those and use it as justification for a policy of not wearing a seatbelt.
“Hindsight bias” seems like the wrong term, unless you’re claiming that Eliezer was much less confident beforehand that this experiment would work than he sounds. The rest of your comment just says that the example is cherry-picked and might be unrepresentative, regardless of how confident Eliezer happened to be that “more light” would work. When Eliezer introduced the Bank of Japan example alongside the SAD example, he explicitly said that both were “cherry-picked,” so I think it’s good that you’re pointing this out in case readers forget.
There are different ways in which the example might be unrepresentative, and if you do think that’s the case, I think it would be helpful to explicitly state how you’d expect it to be unrepresentative. A few examples:
“SAD research is generally on the ball, and this is a weird exception where researchers happened to have a blind spot.”
“Most medical research is dramatically better than research on depressive disorders, so Eliezer got lucky by having a problem that fell in the depression category.”
“Medical research is particularly dysfunctional in ways that make it easy to outperform in this way, but this is an anomaly and isn’t something you can expect to do in areas outside of medical research.”
“Higher-intensity, higher-duration artificial illumination is probably useless for treating nearly all cases of SAD, and this is probably well-known to SAD researchers, but Brienne happened to be suffering from a very atypical case of SAD that does respond well to extra artificial light.”
I’m not saying you need to have a settled view on what the right answer is; I’m just curious very roughly what kinds of explanations you think are relatively likely, versus relatively unlikely.
“‘Hindsight bias’ seems like the wrong term.”
Quoting the Less Wrong wiki: “Hindsight bias is a tendency to overestimate the foreseeability of events that have actually happened. I.e., subjects given information about X, and asked to assign a probability that X will happen, assign much lower probabilities than subjects who are given the same information about X, are told that X actually happened, and asked to estimate the foreseeable probability of X.”
Eliezer claims that he knows better than the experts. The event being foreseen is “my claim to know better than the experts pans out”. He’s pointing to a single instance of that, where it did indeed pan out, and using it to suggest that the event is relatively likely to happen in general. That’s a form of hindsight bias.
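To make the selection effect concrete, here is a minimal simulation (my own illustration; the hit rate and claim count are made-up numbers): even a contrarian whose claims pan out only rarely will, over enough public claims, accumulate striking individual successes to point at.

```python
import random

# Minimal illustration of the selection effect described above (all
# numbers are assumptions): a contrarian whose "I know better than the
# experts" claims rarely pan out still accumulates showcase-worthy hits.
random.seed(0)

true_hit_rate = 0.10   # assumed: claims pan out 10% of the time
n_claims = 50          # assumed: number of public contrarian claims

outcomes = [random.random() < true_hit_rate for _ in range(n_claims)]
hits = sum(outcomes)

print(f"hits: {hits} / {n_claims}")
# Reporting only a hit makes the apparent track record 1/1 = 100%,
# even though the underlying rate is 10%. The single cited success
# carries almost no information about the general hit rate.
```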
“I think it would be helpful to explicitly state how you’d expect it to be unrepresentative.”
We know that there are areas where Eliezer claims to know better than the experts. We also know that the most prominent of those are not medical at all: there are plenty of experts who deny LW-style AI danger, say that cryonics is pointless, or say that you don’t have to believe the many-worlds interpretation to be a competent physicist. So the answer is “those things are so far from SAD that I’d be surprised if the SAD example could be representative of them.”
I think both of the examples that EY gives are cases where he was public about his position before the empirical evidence came in.
EY wrote on Facebook about his project to build the mega-lamp to help Brienne, and was confident enough in it to convince her not to spend the winter outside of the US.
The example of Japanese monetary policy is also one where EY was public about his views before the empirical evidence came in.
That doesn’t help because there’s no baseline. How many times did he have public positions that didn’t pan out?
But the point is that “Eliezer knew better than the experts with respect to lamps” doesn’t imply “Eliezer knows better than the experts on typical LW topics about which Eliezer claims to know better than the experts”.
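One way to see why the missing baseline matters is a toy Bayesian sketch (all numbers here are illustrative assumptions, not figures from the thread): a single observed success barely moves the estimate of the general hit rate, and how far it moves depends entirely on how many unreported misses there were.

```python
# Beta-Bernoulli sketch of the "no baseline" point above. All numbers
# are illustrative assumptions. Start from a uniform Beta(1, 1) prior
# over the general hit rate and update on possible track records.
def posterior_mean_hit_rate(hits: int, misses: int,
                            prior_a: float = 1.0,
                            prior_b: float = 1.0) -> float:
    """Posterior mean of a Beta(prior_a, prior_b) prior after observing
    `hits` successes and `misses` failures."""
    return (prior_a + hits) / (prior_a + prior_b + hits + misses)

# One cherry-picked success, with the number of misses unknown:
for unreported_misses in (0, 5, 20):
    est = posterior_mean_hit_rate(hits=1, misses=unreported_misses)
    print(f"1 hit, {unreported_misses} misses -> estimated hit rate {est:.2f}")
# -> 1 hit,  0 misses: estimated hit rate 0.67
# -> 1 hit,  5 misses: estimated hit rate 0.25
# -> 1 hit, 20 misses: estimated hit rate 0.09
```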
The key problem is that, even though it worked, the knowledge doesn’t spread in an effective way.
There are many things people discover that work, but in our society the knowledge doesn’t spread in a scalable way.