Okay, but the reason you think AI safety/x-risk is important is because twenty years ago, people like Eliezer Yudkowsky and Nick Bostrom were trying to do systematically correct reasoning about the future, noticed that the alignment problem looked really important, and followed that line of reasoning where it took them—even though it probably looked “tainted” to the serious academics of the time. (The robot apocalypse is nigh? Pftt, sounds like science fiction.)
Those subjects were always obviously potentially important, so I don’t see this as evidence against a policy of picking one’s battles by only arguing for unpopular truths that are obviously potentially important.
Hm, touché. Although … if “the community” were actually following a policy of strategically arguing for things based on importance-times-neglectedness, I would expect to see a lot more people working on eugenics, which looks really obviously potentially important to me, either on a Christiano-esque Outside View (smarter humans means relatively more human optimization power steering the future rather than unalignable machine-learning algorithms), or a hard-takeoff view (smarter humans sooner means more time to raise alignment-researcher tykebombs). Does that seem right or wrong to you? (Feel free to email or PM me.)
Importance × Neglectedness − Reputation-hit might be more accurate.
I was thinking that reputation-hit contributes to neglectedness. Maybe what we really need is a way to reduce reputational “splash damage”, so that people with different levels of reputation risk-tolerance can work together or at least talk to each other (using, for example, a website).