It would make sense in capability cases. But unfortunately, in a lot of life-saving cases, all of them are important (this gets into other topics, so let me focus on only the following two points for now). 1. Many causes are not actually comparable in a general cause-prioritization context: for one, people carry personal biases based on their own experience and worldview; for another, it is hard to weigh, say, 10 kids' lives in the US against 10 kids' lives in Canada. 2. Time is critical when thinking about lives. You can think of this like an emergency room.
https://forum.effectivealtruism.org/posts/s3N8PjvBYrwWAk9ds/a-perspective-on-the-danger-hypocrisy-in-prioritizing-one
The link above illustrates a case where timing matters.
They’re just saying the fate of the entire world for the entire future is more important than the fate of some people now. This seems pretty hard to argue against. If you only care about people now and somehow don’t care about future people, I guess you could get there, but that just doesn’t make sense to me.
Time is probably pretty important to whether we get alignment right and therefore survive to have a long future. It’s pretty tough to argue that there’s definitely plenty of time for alignment. If you do some very rough order-of-magnitude math, you’re going to have a hard time arguing against prioritizing it unless you round some factors to zero that really shouldn’t be rounded to zero. The many, many future generations involved are going to outweigh impacts on the current generation even if the expected impact on those future generations is small.
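To make the "very rough order-of-magnitude math" concrete, here is a minimal sketch of the kind of Fermi estimate being gestured at. Every number in it is a hypothetical placeholder chosen only to show the structure of the comparison, not an estimate of real probabilities or population sizes:

```python
# Rough Fermi-style expected-value comparison -- purely illustrative.
# Every number below is a hypothetical placeholder, not an empirical estimate.

current_lives_saved = 1e6       # hypothetical: lives saved now by a near-term intervention
future_lives_at_stake = 1e12    # hypothetical: people across the many future generations
p_survival_gain = 1e-4          # hypothetical: increase in survival probability from alignment work

expected_future_lives = p_survival_gain * future_lives_at_stake  # = 1e8 with these placeholders

print(f"Near-term lives saved:       {current_lives_saved:,.0f}")
print(f"Expected future lives saved: {expected_future_lives:,.0f}")

# With these placeholders the future term dominates (1e8 vs 1e6), and it only
# stops dominating if you round p_survival_gain or future_lives_at_stake
# down to zero -- which is exactly the rounding that shouldn't happen.
```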
This is counterintuitive, I realize, but the math and logic indicate that everyone should be prioritizing getting AI right. I think that’s just correct, even though it sounds strange or wrong.
I would recommend checking out the link I posted from the EA forum to see why AI x-risk may never reach some populations, because they may die before it arrives; and the proposal I have, to work on both, is designed precisely to avoid caring only about subsets of people.