Under Some Thoughts on Singularity Strategies (the first link in my OP), I commented:

I was informed by Justin Shovelain that recently he independently circulated a document arguing for “IA first”, and that most of the two dozen people he showed it to agreed with it, or nearly so.
I did not pursue the HIA first argument myself much after that, as it didn’t seem to be my comparative advantage at the time, and it seemed like @JustinShovelain’s efforts were picking up steam. I’m not sure what happened afterwards, but it would be rather surprising if it didn’t have something to do with Eliezer’s insistence on, and optimism about, directly building FAI at the time (which is largely incompatible with “IA first”), though I don’t have any direct evidence of this. I wasn’t in any in-person rationalist communities, and don’t recall any online discussions of Justin’s document after this.
ETA: The same comment quoted a passage from Eliezer saying that he considered and rejected “IA first” which probably also directly influenced many people who deferred to him on AI x-risk strategy.
To be fair, it was a fairly tepid rejection, mainly saying “FAI is good too”, but yeah I was surprised to see that.
(I think at the time (2011-2021), if asked, I would have said that IA was not my comparative advantage compared to FAI. This was actually mistaken, but only because almost no one was working seriously on IA. I would have read that Yudkowsky paper, but I definitely don’t recall that passage, and generally had the impression that Yudkowsky’s position was “HIA is good, I just happen to be working on FAI”.)
...Ok now that I think about it, I’m just now recalling several conversations in the past few years, where I’m like “we should have talent / funding for HIA” and the other person is like “well shouldn’t MIRI do that? aren’t they working on that?” and I’m like “what? no? why do you think that?”—which suggests an alternative cause for people not working on HIA (namely, that false impression).