saying that people shouldn’t be concerned with existential risk in the future because communities today are being affected, and that I should not have done this research
Sorry to hear this. As someone who works on societal harms of AI, I disagree with the view in the quote. My disagreement is common in my circle, and the quoted view is uncommon there. It is interesting, and I can empathize, because I usually hear this the other way around (AI X-risk people telling others that societal harms should not be considered).
But I also believe there should be no claim that one is “bigger” than the other when it comes to saving lives (edited to clarify). (This might be an unpopular opinion, and it contradicts cause prioritization, which I am personally not a believer in when working on causes related to saving people.) On societal harms, for example, PII leakage, deepfakes, child sexual exploitation, self-harm, and subtle bias and toxicity are real. Societal harms and long-term risks (which map, for me, to agent safety) are both important, and both need resources to work on. This view is again common in my circle. Furthermore, many research methods, the safety-focused mindset, and policy work are actually shared between the two camps, perhaps more than people think.[1]
(Edit, and a gentle general call-out based on my observation: while downvoting on the disagreement axis is perfectly reasonable if one disagrees, downvoting overall karma simply because one disagrees seems inconsistent with LessWrong’s values and what it stands for; suppressing professional dissent and differing opinions might be dangerous.)
But I also believe there should be no claim that one is “bigger” than the other.
No: we are in triage every second of every day. Which problems are the most important and urgent to address is the primary question that matters for allocating resources. Trying to discourage comparative claims discourages the central cognitive activity that altruistically minded people need to engage in if they want to effectively achieve good.
That would make sense in capability cases. But unfortunately, in a lot of life-saving cases, all of them are important (this gets into other things, so let me focus on only the following two points for now). 1. Many causes are not actually comparable in a general cause-prioritization context (first, because people may carry personal biases based on their experience and worldview; second, because it is hard to weigh, for example, 10 kids’ lives in the US against 10 kids’ lives in Canada), and 2. time is critical when lives are at stake. You can think of this as an emergency room.
https://forum.effectivealtruism.org/posts/s3N8PjvBYrwWAk9ds/a-perspective-on-the-danger-hypocrisy-in-prioritizing-one
The link above illustrates an example of when time is important.
They’re just saying the fate of the entire world for the entire future is more important than the fate of some people now. This seems pretty hard to argue against. If you only care about people now and somehow don’t care about future people I guess you could get there, but that just doesn’t make sense to me.
Time is probably pretty important to whether we get alignment right and therefore survive to have a long future. It’s pretty tough to argue that there’s definitely plenty of time for alignment. If you do some very rough order-of-magnitude math, you’re going to have a very difficult time avoiding that conclusion unless you round some factors to zero that really shouldn’t be rounded to zero. The many, many future generations involved are going to outweigh impacts on the current generation, even if the expected impact on those future generations is small.
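To make the rough math concrete, here is a minimal back-of-envelope sketch. Every number in it is a placeholder assumption chosen only to show the structure of the comparison, not an estimate I am defending:

```python
# Back-of-envelope expected-value comparison.
# All numbers are placeholder assumptions, not estimates.
current_lives_at_stake = 1e8   # assumed people seriously harmed if near-term work is deprioritized
future_lives_at_stake = 1e16   # assumed future lives that depend on getting AI right
delta_p_long_future = 1e-6     # assumed tiny increase in P(long future) from extra alignment effort

expected_future_lives_saved = delta_p_long_future * future_lives_at_stake  # 1e10
print(expected_future_lives_saved > current_lives_at_stake)  # True, unless a factor above is rounded to ~0
```

You can push back on any of these placeholders, but to make the comparison flip you have to push one of them very close to zero.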
This is counterintuitive, I realize, but the math and logic indicate that everyone should be prioritizing getting AI right. I think that’s just correct, even though it sounds strange or wrong.
I would recommend checking out the link I posted from the EA Forum to see why AI X-risk may never reach some populations because they die before then; and the proposal I have to work on both precisely avoids caring only about a subset of people.
Re: no claim on which is bigger: when you are trying to guarantee a less-than-epsilon probability of anything in a category happening, I actually agree with this. The reason I don’t agree now is that we’re really, really far away from “guarantee less than epsilon of anything in this category”. In other words, it only seems plausible to ignore relative magnitudes when the probability of anything-in-a-bad-category happening is “so low our representation can’t usefully compare the probabilities of different bad things”. You’d only end up in a place like that after already having solved strong cosmopolitan alignment.
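To put that condition in symbols (a rough sketch, not a worked-out decision theory): ignoring relative magnitudes only becomes reasonable once, for the bad outcomes $B_i$ in the category,

$$P\left(\bigcup_i B_i\right) \le \sum_i P(B_i) < \varepsilon,$$

with $\varepsilon$ small enough that the individual $P(B_i)$ can no longer be usefully distinguished. While the left-hand side is large, the ordering of the $P(B_i)$ is exactly the information triage runs on.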
I also think there are cases where you need to keep up pressure against bad things in order to prevent the collapse of a complex interacting process. E.g., if there’s any spot on your body which is not defended by the immune system, it’s close to just as bad as if any other spot went undefended, because an undefended region would allow pathogens to gain a foothold. It’s not literally equivalent, since different infections can in fact be quite different, but I think a similar intuition is justified: there has to be some minimum level of defense against some kinds of bad things. This is more an argument about the structure of comparisons, though, and doesn’t let you go full Knightian.
That second argument is where I think we are with things like this. If we, humanity, don’t on net have at least x amount of pressure against all the bad things, then we end up losing things we’re trying to save along the way, such as, e.g., a cosmopolitan moral society to uplift sooner or later with cosmopolitan moral AI. Bluntly: if a group is exterminated, you (probably*) can’t bring them back with superintelligence.
(*Unless it turns out resurrection-with-math is possible. I’ve had arguments about this; basically it seems like a very bad bet to me, and I’d much prefer people just not die.)