What happened to you seems pretty representative to me of how a lot of the most promising people who showed up in the last 5 years started working on AI alignment. So it’s not obvious to me that recommending that others read the same things you did is infeasible or the wrong thing to do.
In general it strikes me as more promising to encourage someone to read HPMOR and then the Sequences than to hand them a single article directly on AI and from there try to get them interested in working on AI alignment. The content of the Sequences strikes me as more important for talking sensibly about AI alignment than knowledge of the object-level problem, and I have a general sense that fields of inquiry are defined more by a shared methodology than by a shared object-level problem. That makes me hesitant to promote AI risk to people whose methodology I expect to fail to make any progress on the problem; I would instead first focus on showing them a methodology that might allow them to actually make some progress on it.
I agree that HPMOR may be the best way to get someone to want to read the initially opaque-seeming Sequences: “what if my thought processes were as clear as Rational!Harry’s?”. But the issue then becomes how to send a credible signal that HPMOR is more than a fun read for people with time to spare, especially to individuals who don’t already read regularly (which was me at the time; luckily, I have a slightly addictive personality and got sucked in).
My little brother will be entering college soon, so I gave him the gift I wish I had received at that age: a set of custom-printed HPMOR tomes. I think this is a stronger signal, but it’s too costly (and probably too strange) to do for people we aren’t as close to.
Not sure I agree. HPMOR is cool, but also a turn-off for many. I’d just mention that I work on AI alignment, and when pressed for details, refer them to Superintelligence.
Nitpick: I’m not yet working on alignment. Also, if someone had given me Superintelligence a year ago, I probably would have fixated on all the words I didn’t know instead of taking the problem seriously. A reader might become aware of the problem, and maybe even work on it; but as habryka pointed out, they wouldn’t be using rationalist methodology.
Edit: a lot of the value of reading Superintelligence came from having to seriously consider the problem for an extended period of time. I had already read CEV, WaitButWhy, IntelligenceExplosion, and various LW posts about malevolent genies, but none of that had reached the level of “I, personally, want and need to take serious action on this”. I find it hard to imagine that someone could simply pick up Superintelligence and skip straight to that state of mind, but maybe I’m generalizing too much from my own situation.