The claim is just that the fate of the entire world, across the entire future, matters more than the fate of some subset of people now. That seems pretty hard to argue against. You could reach the opposite conclusion if you only care about people alive today and assign no weight to future people, but that position just doesn’t make sense to me.
Time is probably pretty important to whether we get alignment right and therefore survive to have a long future, and it’s tough to argue that there is definitely plenty of time for alignment. If you do even very rough order-of-magnitude math, it’s hard to avoid that conclusion unless you round to zero some factors that really shouldn’t be rounded to zero. Because so many future generations are involved, even a small expected impact on the future will outweigh the impact on the current generation.
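To make that rough order-of-magnitude point concrete (with purely illustrative numbers I’m picking for this comment, not figures from anywhere): suppose there could be something like 10^16 future people, and some effort shifts the probability of existential catastrophe by even 10^-6. The expected number of future lives affected is 10^16 × 10^-6 = 10^10, which is already on the order of everyone alive today, before you account for how large the per-person stakes of extinction are. You can quibble with every factor, but to make the future term not dominate you essentially have to round one of them all the way to zero.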
This is counterintuitive, I realize, but the math and logic indicate that everyone should be prioritizing getting AI right. I think that’s just correct, even though it sounds strange or wrong.
I’d recommend checking out the EA Forum link I posted for why AI x-risk might not reach some populations before they die of other causes; and the proposal I have, to work on both, precisely avoids caring only about a subset of people.