Solution 2 implies that a smart person with a strong technical background would go on to work on important problems by default. That is not necessarily universally true; it's IMO likely that many such people would be working on less important things than what their social circle is otherwise steering them to work on.
The claim is not that either “solution” is sufficient for counterfactuality; it's that either solution can overcome the main bottleneck to counterfactuality. After that, per Amdahl's Law, there will still be other (weaker) bottlenecks to overcome, including e.g. keeping oneself focused on something important.
I don't think the social thing ranks above “be able to think useful important thoughts at all”. (But I maybe otherwise agree with the rest of your model as an important thing to think about.)
[edit: hrm, “for smart people with a strong technical background” might be doing most of the work here]
“it's IMO likely that many such people would be working on less important things than what their social circle is otherwise steering them to work on”
Why do you think this? When I try to think of concrete examples here, it's all confounded by the relevant smart people having social circles that aren't working on useful problems.
I also think that solution 2 becomes more true once the relevant smart person already wants to solve alignment, or is otherwise already barking up the right tree.
Plausibly going off into the woods decreases the median output while increasing the variance.
One need not go off into the woods indefinitely, though.
I don't think I implied that John's post implied that, and I don't think going into the woods non-indefinitely mitigates the thing I pointed out.