To the substance: If you’re that serious about utilitarianism/doing good, why aren’t you focused on AI alignment? I personally find compelling the argument that we’re sitting at the hinge of history and can influence the entire future of the lightcone, creating an amount of joy that truly boggles the mind if we succeed. That logic would seem to make work toward aligned, beneficial AGI dwarf all other charitable efforts, even if the probabilities and timelines are off by several orders of magnitude. The very real possibility that this is happening soon and currently hangs on a knife’s edge isn’t even necessary to the argument.
Perhaps you feel unequipped to help with that project? If so, it would still make sense to devote your efforts to getting equipped.
Yeah, I’m not really equipped to do AI alignment, and I have a lower P(doom) than others, but I agree it’s important, and it’s one of the places I donate.