I know this sort of idea is inspiring to a lot of you, and I’m not sure I should rain on the parade… but I’m also not sure that everybody who thinks the way I do should have to feel like they’re reading it alone.
To me this reads like “Two Clippies Collide”. In the end, the whole negotiated collaboration is still just going to keep expanding purely for the sake of expansion.
I would rather watch the unlifted stars.
I suppose I’m lucky I don’t buy into the acausal stuff at all, or it’d feel even worse.
I’m also not sure that they wouldn’t have solved everything they thought was worth solving long before even getting out of their home star systems, so I’m not sure I buy either the cultural exchange or the need to beam software around. The Universe just isn’t necessarily that complicated.
I didn’t think the implication was necessarily that they planned to disassemble every solar system and turn it into probe factories. It’s more like… seeing a vast empty desert and deciding to build cities in it. A huge universe, barren of life except for one tiny solar system, seems not depressing exactly but wasteful. I love nature and I would never want all the Earth’s wilderness to be paved over. But at the same time I think a lot of the best the world has to offer is people, and if we kept 99.9% of it as a nature preserve then almost nobody would be around to see it. You’d rather watch the unlifted stars, but to do that you have to exist.
No, the probes are instrumental and are actually a “cost of doing business”. But, as I understand it, the orthodox plan is to get as close as possible to disassembling every solar system and turning it into computronium to run the maximum possible number of “minds”. The minds are assumed to experience qualia, and presumably you try to make the qualia positive. Anyway, a joule not used for computation is a joule wasted.
That’s like saying that because we live in a capitalist society, the default plan is to destroy every bit of the environment and fill every inch of the world with high-rise housing projects. It’s… true in some sense, but only as a hypothetical extreme, a sort of economic spherical cow. In reality, people and societies are more complicated and less single-minded than that, and people mostly just don’t want that kind of wholesale destruction.
Presumably you’ve also read The ants and the grasshopper and [Knowing it’s connected is a mild spoiler]? I think of those as companion pieces to this, which is only giving you a part of the story and not really conveying what all the resources were “for.”
I had read it, had forgotten about it, hadn’t connected it with this story… but didn’t need to.
This story makes the goal clear enough. As I see it, eating the entire Universe to get the maximal number of mind-seconds[1] is expanding just to expand. It’s, well, gauche.
Really, truly, it’s not that I don’t understand the Grand Vision. It never has been that I didn’t understand the Grand Vision. It’s that I don’t like the Grand Vision.
It’s OK to be finite. It’s OK to not even be maximal. You’re not the property of some game theory theorem, and it’s OK to not have a utility function.
It’s also OK to die (which is good because it will happen). Doesn’t mean you have to do it at any particular time.
Appropriately weighted if you like. And assuming you can define what counts as a “mind”.
I thought it was pretty courageous of you to state this so frankly here, especially given how the disagree-votes turned out.
The problem with not expanding is that you can be pretty sure someone else will then grab what you didn’t and may use it for something that you hate. (Unless you trust that they’ll use it well.)
It’s not “just to expand”. Expansion, at least in the story, is instrumental to whatever the content of these mind-seconds is.
I already have people planning to grab everything and use it for something that I hate, remember? Or at least for something fairly distasteful.
Anyway, if that were the problem, one could, in theory, go out and grab just enough to be able to shut down anybody who tried to actually maximize. Which gives us another armchair solution to the Fermi paradox: instead of grabby aliens, we’re dealing with tasteful aliens who’ve set traps to stop anybody who tries to go nuts expansion-wise.
Beyond a certain point, I doubt that the content of the additional minds will be interestingly novel. Then it’s just expanding to have more of the same thing that you already have, which is more or less identical from where I sit to expanding just to expand.
And I don’t feel bound to account for the “preferences” of nonexistent beings.
Somehow people keep finding meaning in falling in love and starting a family, even when billions of people have already done that before. We also find meaning in pursuing careers very similar to what millions of people have done before, or in traveling to destinations that have been visited by millions of tourists. The more similar an activity is to something our ancestors did, the more meaningful it seems.
From the outside, all this looks grabby, but from the inside it feels meaningful.
… but a person who doesn’t exist doesn’t have an “inside”.
Which non-existing person are you referring to?
You can choose or not choose to create more “minds”. If you create them, they will exist and have experiences. If you don’t create them, then they won’t exist and won’t have experiences.
That means that you’re free to not create them based on an “outside” view. You don’t have to think about the “inside” experiences of the minds you don’t create, because those experiences don’t and will never exist. That’s still true even on a timeless view; they never exist at any time or place. And it includes not having to worry about whether or not they would, if they existed, find anything meaningful[1].
If you do choose to create them, then of course you have to be concerned with their inner experiences. But those experiences only matter because they actually exist.
I truly don’t understand why people use that word in this context or exactly what it’s supposed to, um, mean. But pick pretty much any answer and it’s still true.
My point is that potential parents often care about non-existing people: their potential kids. And once they bring these potential kids into existence, those kids might start caring about a next generation. Similarly, some people/minds will want to expand because that is what their company does, or because they would like the experience of exploring a new planet/solar system/galaxy, or would like the status of being the first to settle there.
If it’s OK to be not maximal, that will be reflected in The Grand Vision. But if we stay not maximal, it means an immeasurable number of wonders will never exist because of whatever limited vision you prefer. This is unfair.
I think it’s a good model in that it shows the timescales and the levels of resources that such a civilization would likely have access to, based on current (last 10 or so years) understanding of the universe.
Beaming software is also a way to save mass, and it would be a way to have “tourists”.
Details like the process for negotiating with aliens, or whether you can even build black holes and use them that way, are obviously highly unlikely to be correct.