ETA: The same comment quoted a passage from Eliezer saying that he considered and rejected “IA first”, which probably also directly influenced many people who deferred to him on AI x-risk strategy.
To be fair, it was a fairly tepid rejection, mainly saying “FAI is good too”, but yeah I was surprised to see that.
(I think at the time (2011-2021), if asked, I would have said that IA is not my comparative advantage compared to FAI. This was actually mistaken, but only because almost no one was working seriously on IA. I would have read that Yudkowsky paper, but I definitely don’t recall that passage, and generally had the impression that Yudkowsky’s position was “HIA is good, I just happen to be working on FAI”.)
...Ok now that I think about it, I’m just now recalling several conversations in the past few years, where I’m like “we should have talent / funding for HIA” and the other person is like “well shouldn’t MIRI do that? aren’t they working on that?” and I’m like “what? no? why do you think that?”—which suggests an alternative cause for people not working on HIA (namely, that false impression).