I thought that estimate (which is for currently living people) was too high, later talked through the parameters and further complications with Anna, and it hasn’t been re-used since. But in any case, it was about work aimed at AI risk in general, rather than at SI. It didn’t, for example, claim that SI or an SI AGI project was a much better bet than research like that at FHI, or than finding other ways to address potential risk.
Where I disagree with Eliezer, I have registered my views in many comments here on LW, in presentations, in responses to questions, and the like, but he speaks for himself.
Flap of a butterfly’s wings
I don’t buy the zero-knowledge claim. There are general heuristics (not always right, but still helpful) that more knowledge and thinking a problem through tend to lead to better responses, and that trying to find ways to achieve X makes you more likely to achieve X.