If developing AGI were an unequivocally good thing, as Eliezer used to think, then I guess he’d be happily developing AGI instead of trying to raise the rationality waterline. I don’t know what Luke would do if there were no existential risks, but I don’t think his current administrative work is very exciting for him. Here’s a list of people who want to save the world and are already changing their lives accordingly. There have also been many LW posts by people who want to choose careers that maximize the probability of saving the world. Judge the proportion of empty talk however you want, but I think there are quite a few fanatics.
Indeed, Eliezer once told me that he was a lot more gung-ho about saving the world when he thought it just meant building AGI as quickly as possible.