Well, but there’s also the issue of the sums being, at all times, partial. Low-probability, high-impact scenarios are inherently problematic because a very large number of such scenarios can be constructed (that is where their low probability comes from), and ultimately your action will depend not on utility but on which types of scenarios you are more likely to construct or encounter.
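To make the partial-sum point concrete, here is a toy sketch (all numbers and names made up for illustration): a pool of equally probable "saves a billion lives" and "costs a billion lives" stories sums to zero over the full set, but any partial enumeration is dominated by whichever kind of story you happened to construct first.

```python
def expected_utility(scenarios):
    """Partial expected-utility sum over an enumerated list of (p, u) scenarios."""
    return sum(p * u for p, u in scenarios)

# 1000 scenario pairs: each pair is one "saves a billion lives" story
# and one equally probable "costs a billion lives" story.
p = 1e-9
good = [(p, 1e9) for _ in range(1000)]   # each term contributes +1
bad = [(p, -1e9) for _ in range(1000)]   # each term contributes -1

# The full, symmetric set cancels out exactly...
print(expected_utility(good + bad))   # 0.0

# ...but a partial sum reflects only the scenarios enumerated so far.
print(expected_utility(good[:100]))   # 100.0
print(expected_utility(bad[:100]))    # -100.0
```

The "utility" you compute is an artifact of which scenarios made it into the list, not a property of the action itself.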
There’s also the issue of the predictability of actions. E.g. you can, with a carefully placed flap of butterfly wings, save or kill an enormous number of people, but it all balances out if you are equally able to construct arguments for and against the flap. That is easy for the butterfly, but not so easy for other actions, such as donating. Whereas there are clearly possible scenarios (an accidental nuclear exchange) that your fallout shelter can save you from, and those do not balance out.
Ultimately, it all comes down to the ability to predict what happens. You can’t really predict what comes of giving money to someone to prevent a robot apocalypse. Maybe they’ll produce useful insights. Maybe the reason they are so concerned is that their thinking about artificial intelligence takes place inside a box full of particularly dangerous AIs, where they do all their research, and this actually increases the risk or creates even worse scenarios (AIs that torture everyone). Maybe they are promoting the notion of the risk. Maybe Frankenstein and the Terminator already saturated that. Maybe they look bad or act annoying (non-credentialed people intruding into highly technical fields tend to have that effect, especially in cultures that hold scholarship and testing in high regard: Asian cultures, the former Soviet Union, even Europe) and discredit the concerns, making important research harder to publish. You can’t evaluate all of that, nor can you produce a representative and sufficiently large sample of the concerns, so the expected utility is exactly zero (minus the predictable consequence of you having less money to spend on any future deals). The fallout shelter, on the other hand, is not exactly zero: it may not be the best idea, but you have a clear world model in which it does not cancel out.
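The donation-versus-shelter contrast can be sketched the same way, again with entirely made-up probabilities and utilities. The donation’s unknown consequences come in equally plausible positive and negative versions that mirror each other; the shelter’s main consequence has no mirror image.

```python
# Hypothetical numbers only: each donation scenario has an equally
# plausible opposite, so the constructed pairs cancel.
donation_outcomes = [
    (0.01,  1e6),   # they produce useful insights
    (0.01, -1e6),   # their research itself increases the risk
    (0.01,  1e6),   # they spread awareness of the risk
    (0.01, -1e6),   # they discredit the concerns instead
]

# The shelter's outcomes are one-sided: nothing mirrors the save,
# and nothing mirrors the construction cost.
shelter_outcomes = [
    (0.001,  1e6),  # shelter saves you in an accidental exchange
    (0.999, -1e4),  # otherwise you are just out the construction cost
]

eu_donation = sum(p * u for p, u in donation_outcomes)
eu_shelter = sum(p * u for p, u in shelter_outcomes)

print(eu_donation)  # 0.0: the constructed scenarios cancel out
print(eu_shelter)   # nonzero: no mirror-image scenario cancels it
```

Whether the shelter’s number is positive or negative, you have a model in which it is some definite quantity, which is not true of the donation.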