Can someone explain why we don’t more often see people with complex, differing views on a topic placing bets against each other like this?
Verden
[Question] Why don’t you introduce really impressive people you personally know to AI alignment (more often)?
Would it be helpful to think about something like: “What Brier score will a person in the reference class of people-similar-to-Eliezer_2022-in-all-relevant-ways have after making a bunch of predictions on Metaculus?” Perhaps we should set up this sort of question on Metaculus or Manifold, though I would probably refrain from explicitly mentioning Eliezer in it.
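For concreteness, here is a minimal sketch of the Brier score I have in mind (my own illustration; the function name and the example numbers are made up, not from any actual Metaculus data): for a set of binary forecasts it is the mean squared difference between the stated probability and the realized outcome, so lower is better and always answering 50% scores 0.25.

```python
def brier_score(predictions):
    """predictions: iterable of (probability, outcome) pairs,
    where probability is in [0, 1] and outcome is 0 or 1."""
    pairs = list(predictions)
    # Mean squared error between stated probability and what actually happened.
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

# Hypothetical example: four resolved forecasts.
print(brier_score([(0.9, 1), (0.2, 0), (0.7, 1), (0.4, 1)]))  # 0.125
```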
Has anyone looked into how it compares to stannous fluoride?
It’s funny that this post has probably made me feel more doomy about AI risk than any other LW post published this year. Perhaps for no particularly good reason. There’s just something really disturbing to me about seeing a vivid case where folks like Jacob, Eli and Samotsvety, apparently along with many others, predict a tiny chance that a certain thing in AI progress will happen (by a certain time), and then it just… happens.