Okay, I may turn this into a top-level post, but more thoughts here for now?
I feel a lot of latent grief and frustration and also resignation about a lot of tradeoffs presented in the story. I have some sense of where I’ll end up when I’m done processing all of it, but alas, I can’t just skip to the “done” part.
...
I’ve hardened myself into the sort of person who is willing to turn away people who need help. In the past, I’ve helped people and been burned by it badly enough that it’s clear I need to defend my own personal boundaries and ensure it doesn’t happen again.
I also help manage many resources now that need to be triaged, and I’ve had to turn away people who are perfectly good people, who wouldn’t take advantage of me, because I think the world needs those resources for something else. Many times, the resources I’m managing (such as, say, newcomer access to LW, or to some meetups I’ve run) feel like they should be community-like things that don’t turn people away.
Often, the people I’m turning away really won’t find another place that’ll be as good a home for them as LW. But, the reason LW is a good place is specifically because of gatekeeping. I’ve felt many similar things about the Berkeley community, which is extra complicated because it’s actually multiple overlapping communities with different needs/goals and porous boundaries.
I’m bitter and sad about it. But, also, I’ve grieved it enough to make do.
When I see new young naive ants freely giving, because they’ve never been burned and haven’t yet come to terms with their beckoning responsibilities, I feel a whiff of jealousy, but, at this point, mostly a cynical “oh you sweet summer child” feeling.
...
A second tier of confusion/frustration is about “when do we actually get to cash in our victory points and do nice things?”
A significant update for me, when chatting with @Zvi a while ago, was the note that a nation like the US might have the choice between distributing money more equitably, or having slightly higher GDP growth per year. And it may look like we have so much money, and so many people who could use help. The future is here; it’s just not evenly distributed yet.
But, Zvi pointed out (I think, this was a while ago), if the US had done that 100 years ago, its growth would have been more similar to Mexico’s, and today the US would be significantly less wealthy. Would I trade that away for somewhat-more-equitably-distributed money in the past? Would I make the equivalent trade for the future?
And that kicked me pretty hard in the moral-theory. Compound interest is really good.
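(A toy calculation, just to make the compounding point concrete. The growth rates below are made up for illustration, not anyone’s actual US/Mexico figures:)

```python
# Toy illustration of compounding: a modest difference in annual growth,
# sustained for a century, produces a large gap in final wealth.
# The rates are made up for illustration, not historical US/Mexico figures.

def grow(start, rate, years):
    """Compound `start` at `rate` per year for `years` years."""
    return start * (1 + rate) ** years

faster = grow(1.0, 0.035, 100)   # ~3.5% growth per year for 100 years
slower = grow(1.0, 0.020, 100)   # ~2.0% growth per year for 100 years

print(f"3.5% growth: {faster:.1f}x")      # ≈ 31x
print(f"2.0% growth: {slower:.1f}x")      # ≈ 7x
print(f"ratio: {faster / slower:.1f}x")   # the higher-growth path ends ~4x richer
```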
It left me with a nagging sense that… surely at some point we’d just have so much stuff that we’d get to just spend it on nice things instead of investing it in the future?
It seems like the answer is a weird mix of “Well, in the near future… generating more wealth comes alongside providing lots of object-level-good-stuff. Billions are being lifted out of poverty, and along the way lots of people are making cool art, having fun, loving each other. The mechanism of the compound interest yields utility. That utility could locally be distributed more fairly or evenly, maybe, but it’s not like the process of generating Even More Utility Tomorrow isn’t producing genuine nice things.”
But, also, maybe:
“On a cosmic scale, maybe it turns out that the people who concede most to Moloch end up winning the universe.”
Or, somehow more horrifying:
“Maybe it actually is wasteful and wrong, by my current extrapolated values, to spend our post-singularity victory points on living lavish rich lives in the solar system, rather than saving our energy for winter.” Something something, computronium will run more efficiently when the universe is colder (I vaguely recall hearing an argument about that). Will the platonic spirit of goodness begrudge me/us saving a solar system or galaxy for inefficient biological humans to live out parochial lives? In the end of days, when fades at last the last-lit sun, will trillions of poor but efficient beings curse my name and say “man, we could have utilized that energy so much better than those guys. Why were they so selfish?”
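(The half-remembered argument is probably the Landauer limit: erasing a bit of information costs at least k·T·ln 2 of energy, so a colder universe means more irreversible computation per joule. A minimal sketch, with round illustrative temperatures:)

```python
# Landauer limit: the minimum energy to erase one bit is k_B * T * ln(2),
# so a colder environment allows more irreversible bit operations per joule.
# Temperatures are round illustrative values.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def bits_per_joule(temperature_kelvin):
    """Upper bound on irreversible bit erasures per joule at a given temperature."""
    return 1.0 / (K_B * temperature_kelvin * math.log(2))

print(f"at 300 K (roughly room temperature): {bits_per_joule(300):.2e} bits/J")
print(f"at   3 K (roughly the CMB today):    {bits_per_joule(3):.2e} bits/J")  # ~100x more
```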
The figure-ground inversion of “Do I identify more with the grasshopper or the ant?” is disorienting.
...
I don’t like living exponentially.
I wanna live in a simple little village, making small-scale projects and feeling good about it.
A lot of rationalists are pretty excited to have galaxy-sized brains doing amazing galaxy-sized things at galaxy-brained speeds. I feel a grudging “eh, I guess, if that’s what my friends end up doing?”. I come along into the glorious transhuman future kinda grudgingly. (As I hang out with people who orient their lives more around the GTF, I slowly self-modify into someone who’s a bit more excited about it, and I don’t resist that transition, but I don’t hurry it along.)
For now, the notion of having to grow exponentially and move faster and faster feels horrifying. I wanna stay here and smell the roses.
I like playing Village-Building videogames for the first 1-3 phases, when things are slow and simple. I don’t like the later phases of those games where you’re managing vast civilizational industries.
...
Sometimes, I’ve dwelt upon the dream of “someday the singularity will be here, and instead of feeling an obligation to help steer the world through the narrow needle of fate, I can chillax and do whatever nice things I want.”
And then I reread Meditations on Moloch, and look around at the world around me and think about some of the things Robin Hanson is on about, and imagine multipolar futures wondering:
“What if… the precariousness of human value never grows up into something strong and resilient? What if we pass the singularity but there are just always forces threatening to snuff out human value, forcing it to self-modify into monotonous colonizers?” This fear sometimes manifests as “what if I never get to rest?”, which is fairly silly. I think the parts of humanity that’d need defending in Multipolar Hellworld don’t especially need help from a Raemon-descended being. By that point it’d be cheap to engineer AIs optimized for doing the defending. The parts of me I care about are probably either dead, or getting to live out whatever future me thinks of as living the good life.
But, still, what if things are precarious forever? Maybe we send out colonizers to try and secure the Long Future, but those colonizers drift; lightspeed delays plus very fast civilizations make longterm alignment impossible, and endless wars break out.
...
All I want is to enjoy summer for a while before winter comes.
A thing that I found reassuring was realizing that, while I think the longterm future will put all kinds of crazy pressures on humanity to evolve into something weird and alien… the human soul that I want to see get a chance to flourish doesn’t feel a need for billions or even millions of years to do so. I feel like the parochial humanity that I want to get to see utopia with only really needs, like, I dunno, a few hundred thousand years of getting to live out parochial human utopia together before we’re like “okay, that was cool. What next?”
But I’m not even sure what any of this means.
As I said at the beginning, I have a rough sense of where this moral tradeoff grappling is all going, but I dunno, I’m stuck here at the moment, not ready to give up on grieving it yet.
It seems to me that the optimal schedule by which to use up your slack / resources is based on risk. When planning for the future, there’s always the possibility that some unknown unknown interferes. When maximizing the total Intrinsically Good Stuff you get to do, you have to take into account timelines where all the ants’ planning is for naught and the grasshopper actually has the right idea. It doesn’t seem right to ever have zero credence in this (as that would mean being totally certain that the project of saving up resources for cosmic winter will go perfectly smoothly, and we can’t be certain about something that will literally take trillions of years); therefore, it is actually optimal to always put some of your resources into living for right now, proportional to that uncertainty about the success of the project.
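(A toy version of that claim, under assumptions of my own: log utility, one “now” period and one “later” period, and a single probability p that the saved resources ever pay off. Under those assumptions the optimal spend-now fraction works out to 1/(1+p), which rises as p falls:)

```python
# Toy model of the grasshopper/ant tradeoff.
# Spend a fraction c of your resources now; the rest grows by a factor R but
# only pays off with probability p (the long-term project might fail).
# With diminishing returns (log utility), the optimal c rises as p falls.
import math

def expected_utility(c, p, R):
    now = math.log(c)                  # enjoy the fraction c right away
    later = p * math.log((1 - c) * R)  # the saved share pays off with probability p
    return now + later

def best_spend_fraction(p, R=100.0, grid=1000):
    candidates = [i / grid for i in range(1, grid)]   # c in (0, 1)
    return max(candidates, key=lambda c: expected_utility(c, p, R))

for p in (1.0, 0.9, 0.5, 0.1):
    print(f"P(project pays off) = {p:.1f} -> spend now ≈ {best_spend_fraction(p):.2f}")
# Analytically the optimum is c = 1 / (1 + p): 0.50, 0.53, 0.67, 0.91.
```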
I remember reading something about the Great Leap Forward in China (it may have been the Cultural Revolution, but I think it was the Great Leap Forward) where some communist party official recognised that the policy had killed a lot of people and ruined the lives of nearly an entire generation, but they argued it was still a net good because it would enrich future generations of people in China.
For individuals, you weigh up the risks/rewards of deferring your resources for the future. But, as a society, asking individuals to give up a lot of potential utility for unborn future generations is a harder sell. It requires coercion.
The math doesn’t necessarily work out that way. If you value the good stuff linearly, the optimal course of action will either be to spend all your resources right away (because the high discount rate makes the future too risky) or to save everything for later (because you can get such a high return on investment that spending any now would be wasteful). Even in a more realistic case where utility is logarithmic with, for example, computation, anticipation of much higher efficiency in the far future could lead to the optimal choice being to use essentially the bare minimum right now.
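(A minimal sketch of the linear-utility case, using the same toy setup as above: spend a fraction c now, and the rest pays off times R with probability p; the numbers are illustrative. Expected utility is then linear in c, so the optimum is always one corner or the other, decided by whether p·R beats 1:)

```python
# With utility linear in resources, the expected utility of spending fraction c now is
#   U(c) = c + p * R * (1 - c) = p*R + c * (1 - p*R),
# which is linear in c, so the optimum is a corner:
# spend everything now if p*R < 1, save everything if p*R > 1.
def best_linear_spend(p, R):
    return 1.0 if p * R < 1.0 else 0.0

print(best_linear_spend(p=0.5, R=1.5))    # 1.0: risky and low return -> spend it all now
print(best_linear_spend(p=0.5, R=100.0))  # 0.0: the return dominates the risk -> save it all
```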
I think there are reasonable arguments for putting some resources toward a good life in the present, but they mostly involve not being able to realistically pull off total self-deprivation for an extended period of time. So finding the right balance is difficult, because our thinking is naturally biased to want to enjoy ourselves right now. How do you “cancel out” this bias while still accounting for the limits of your ability to maintain motivation? Seems like a tall order to achieve just by introspection.
Exactly this. This is the relationship in RL between the discount factor and the probability of transitioning into an absorbing state (death).
Ooh! I don’t know much about the theory of reinforcement learning; could you explain that more / point me to references? (Also, this feels like it relates to the real reason for the time-value of money: money you supposedly will get in the future always has a less than 100% chance of actually reaching you, and is thus less valuable than money you have now.)
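(For reference, the standard statement of that relationship: if the process ends each step with probability 1 − p, a reward t steps in the future is only received with probability p^t, so the undiscounted expected return equals the discounted return with γ = p. A rough Monte Carlo check on a toy reward stream of my own:)

```python
# Sketch: discounting as survival probability.
# If an episode terminates each step with probability (1 - p), a reward t steps
# in the future is received with probability p**t, so the undiscounted expected
# return equals the return discounted with gamma = p.
import random

def discounted_return(rewards, gamma):
    return sum(r * gamma**t for t, r in enumerate(rewards))

def simulated_return(rewards, survive_prob, episodes=200_000):
    total = 0.0
    for _ in range(episodes):
        for r in rewards:
            total += r
            if random.random() > survive_prob:  # transition into the absorbing state
                break
    return total / episodes

rewards = [1.0] * 20  # toy reward stream: 1 per step for 20 steps
p = 0.9
print(discounted_return(rewards, gamma=p))        # analytic: ≈ 8.78
print(simulated_return(rewards, survive_prob=p))  # Monte Carlo: close to the same
```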
I’m surprised this quote is not more common around here, in discussions of turning far-mode values into near-mode actions, with the accompanying denial that the long run is strictly the sum of short runs.
More than any other time in history, mankind faces a crossroads. One path leads to despair and utter hopelessness. The other, to total extinction. Let us pray we have the wisdom to choose correctly.
The mechanism of the compound interest yields utility.
Depends on what you mean by “utility.” If “happiness,” the evidence is very much unclear: though Life Satisfaction (LS) is correlated with income/GDP when we make cross-sectional measurements, LS is not correlated with income/GDP when we make time-series measurements. This is the Easterlin Paradox. Good overview of a recent paper on it, presented by its author. Full paper here. Good discussion of the paper on the EA forum here (responses from the author as well as Michael Plant in the comments).
I’m reminded of The Last Paperclip