Excuse me, but this sounds to me like a terrible argument. If the far future goes right, our descendants will despise us as complete ignorant barbarians and won’t give a crap what we did or didn’t do. If it goes wrong (i.e., rocks fall, everyone dies), then all those purported descendants aren’t a minus on our humane-ness ledger, they’re a zero: potential people don’t count (since they’re infinite in number and don’t exist, after all).
Besides, I damn well do care how people lived 5000 years ago, and I would certainly hope that my great-to-the-Nth-grandchildren will care how I live today. This should especially matter to someone whose idea of the right future involves being around to meet those descendants, in which case the preservation of lives ought to matter quite a lot.
God knows you have an x-risk fetish, but other than FAI (which carries actual benefits aside from averting highly improbable extinction events) you’ve never actually justified it. There has always been some small risk we could all be wiped out by a random disaster. The world has been overdue for certain natural disasters for millennia now, and we just don’t really have a way to prevent any of them. Space colonization would help, but there are vast and systemic reasons why we can’t do space colonization right now.
Except, of course, the artificial ones: nuclear winter, global warming, blah blah blah. Those, however, like all artificial problems, are deeply tied in with the human systems generating them, and they need much more systemic solutions than “donate to this anti-global-warming charity to ameliorate the impact or reduce the risk of climate change killing everyone everywhere”. But, rather like in the Silicon Valley start-up community, there’s a nasty assumption that problems too large for 9 guys in a basement simply don’t exist.
You seem to suffer from a bias where you simply say, “people are fools and the world is insane”, and thus write off any notion of doing something about it, modulo your MIRI/CFAR work.
I think future humans are definitely worthy of consideration. Consider placing a time bomb, set to go off in 10 years, in a childcare centre for 6-year-old kids. Even though the children who will be blown up don’t yet exist, this is still a bad thing to do, because it robs those kids of their future happiness and experience.
If you subscribe to the block model of the universe, then time is just another dimension, and future beings exist in the same way that someone in the next room, whom you can’t see, also exists.
Even though the children who will be blown up don’t yet exist, this is still a bad thing to do, because it robs those kids of their future happiness and experience.
Well, it’s definitely a bad thing to do because it kills the children. I dunno if I’d follow that next inference ;-).
If you subscribe to the block model of the universe, then time is just another dimension,
Luckily, I don’t. It works well for general relativity at the large scale, but doesn’t yet seem to integrate well with the smallest scales of possible causality at the quantum level. I think that a model which ontically elides the distinction between past, present, and future as “merely epistemic” is quite possibly mistaken and requires additional justification.
I realize this makes me a naive realist about time, but on the other hand, I just don’t see which predictions a “block model” actually makes about causality that account for both the success of general relativity and my very real ability to make interventions such as bombing or not bombing (personally, I’d prefer not bombing, there’s too many damn bombs lately) a day-care. You might say “you’ve already made the choice and carried out the bombing in the future”, but then you have to explain what the fundamental physical units of information are and how they integrate with relativity to form time as we know it in such a way that there can be no counterfactuals, even if only from some privileged informational reference frame.
In fact, the lack of privileged reference frames seems like an immediate issue: how can there be a “god’s eye view” where complete information about past, present, and future exists together without violating relativity by privileging some reference frame? Relativity seems configured to allow loosely-coupled causal systems to “run themselves”, so to speak, in parallel, without some universal simulator needing a global clock, so that synchronization only happens at the speed-of-light causality-propagation rate.
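The no-global-clock picture above can be sketched as a toy discrete-event simulation (this is my own illustration, not anything from the thread; the `Site` class, its `emit`/`step` methods, and the unit propagation speed are all invented for the example): each site advances its own local clock independently, and the only “synchronization” is that an event at one site can influence another no earlier than (distance / c) later.

```python
# Toy sketch: loosely-coupled causal systems running on their own clocks,
# with influence propagating at a finite speed -- no global clock required.
C = 1.0  # causal propagation speed (natural units; an assumption of the toy model)

class Site:
    def __init__(self, name, position):
        self.name = name
        self.position = position
        self.local_time = 0.0
        self.inbox = []      # (earliest_arrival_time, message) pairs in flight
        self.received = []   # messages whose light-travel delay has elapsed

    def emit(self, other, message):
        # Earliest possible arrival is emission time plus light-travel delay.
        delay = abs(other.position - self.position) / C
        other.inbox.append((self.local_time + delay, message))

    def step(self, dt):
        # Each site runs itself, in parallel, on its own local clock.
        self.local_time += dt
        still_in_flight = []
        for arrival, msg in self.inbox:
            if arrival <= self.local_time:
                self.received.append((arrival, msg))
            else:
                still_in_flight.append((arrival, msg))
        self.inbox = still_in_flight

a = Site("A", position=0.0)
b = Site("B", position=10.0)  # 10 light-units away from A
a.emit(b, "event at A")

for _ in range(5):
    a.step(1.0)
    b.step(1.0)
print(b.received)  # [] -- only 5 time units have passed; the signal needs 10

for _ in range(5):
    b.step(1.0)
print(b.received)  # [(10.0, 'event at A')] -- the influence has now arrived
```

Nothing here settles the metaphysics, of course; it only shows that a simulation with purely local clocks and speed-limited influence is perfectly consistent, which is the intuition the paragraph appeals to.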
Nick Bostrom has written some essays arguing for the prioritization of existential risk reduction over other causes, e.g. this one and this one.
I agree with your last paragraph.
Do you, now?
And how does that caring manifest itself?
Presumably by staying on the lookout for opportunities to get their hands on a time machine.
Hand me a time machine and you’ll find out!
Go look for blue Public Call Police Boxes :-P
If I’m correctly understanding the subtext of that question (“if it doesn’t affect what you actually do besides talking, it’s meaningless to say you care about it”) then I respectfully disagree.
I am quite happy to say that A cares about B if, e.g., A’s happiness is greatly affected by B. If it happens that A is able to have substantial effect on B, then (1) we may actually be more interested in the question “what if anything does A do about B?”, which could also be expressed as “does A care about B?”, and (2) if the answer is that A doesn’t do anything about B, then we might well doubt A’s claims that her happiness is greatly affected by B. But in cases like this one—where, so far as we know, there is and could be nothing whatever that A can do to affect B—I suggest that “cares about” should be taken to mean something like “has her happiness affected by”, and that asking what A does about B is simply a wrong response.
(Note 1. I am aware that I may be quite wrong about the subtext of the question. If an answer along the lines of “It manifests itself as changes in my emotional state when I discover new things about the lives of people 5000 years ago or when I imagine different ways their lives might have been” would have satisfied you, then the above is aimed not at you but at a hypothetical version of you who meant something else by the question.)
(Note 2. You might say that caring about something you can’t influence is pointless and irrelevant. That might be correct, though I’m not entirely convinced, but in any case “how does that caring manifest itself?” seems like a strange thing to say to make that point.)
I feel guilty for not living in ways that would be approved of by our ancestors.