Yes, there are different ways to conceive of what news is good or bad, and yes, it is good from a God-view if filters are late. But to those of us who know that we exist now and have passed all previous filters, but don’t know how many others out there are at a similar stage, the news that the biggest filters lie ahead of us is surely discouraging, even if useful.
My post tried to argue that some of them are better than others.
But is such discouragement rational? If not, perhaps we should try to fight against it. It seems to me that we would be less discouraged if we considered our situation and decisions from what you call the God-view.
Call me stuck in ordinary decision theory, with less than universal values. I mostly seek info that will help me now make choices to assist myself and my descendants, given what I already know about us. The fact that I might have wanted, before the universe began, to commit to rating universes as better if they had later filters is not very relevant to what info I will now call “bad news.”
After I wrote my previous reply to you, I realized that I don’t really know why we call anything good news or bad news, so I may very well have been wrong when I claimed that a late great filter is not bad news, or that it’s more rational not to call it bad news.
That aside, ordinary decision theory has obvious flaws. (See here for the latest example.) Given human limitations, we may not be able to get ourselves unstuck, but we should at least recognize that it’s less than ideal to be stuck this way.
The goal of UDT-style decision theories is to make optimal decisions at any time, without needing to precommit in advance. Looking at the situation as if from before the beginning of time is argued to be the correct perspective from any location and any state of knowledge, no matter what your values are.
It might be interesting, if it were true.
I don’t think anyone has so far attempted to make the case that the “biggest” filters lie in the future.
I think many of the most common solutions to the Fermi Paradox are exactly that, making the case that there is a Great Filter ahead.
Maybe this is a good time to mention my proposed solution to the Fermi Paradox. It does not invoke a Great Filter. The one-sentence version is that we cannot observe other civilizations if they are expanding at the speed of light.
The main idea is a 0-1 law for the expansion speed of civilizations. I argue that there is only a very short timeframe in the life of a civilization when its sphere of influence is already expanding, but not yet expanding at exactly the speed of light. Before this short phase transition, it can’t be observed with current human technology. After the phase transition, it can’t be observed at all.
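To make the observability claim concrete, here is a minimal sketch (my own illustration, not from the comment above) of the “warning window”: an observer at distance d first sees light from the expansion’s origin at time d/c, while the front itself arrives at time d/v, so the observable window is d/v − d/c, which vanishes as v approaches c.

```python
# Work in units where c = 1: distances in light-years, times in years.
def warning_window(d_ly: float, v_frac_c: float) -> float:
    """Years between first seeing an expanding civilization and its arrival.

    Light from the origin reaches us at t = d/c; the expansion front
    itself arrives at t = d/v. The difference is the window in which
    the civilization is observable but has not yet arrived.
    """
    return d_ly / v_frac_c - d_ly

for v in (0.5, 0.9, 0.99, 0.999):
    print(f"v = {v}c, d = 1000 ly -> window = {warning_window(1000, v):.1f} years")
```

As the loop shows, the window at a fixed distance shrinks toward zero as the expansion speed approaches c, which is the sense in which a near-light-speed expander is effectively unobservable before contact.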
That’s an interesting idea. But it looks to me like you still need to postulate that civilization is very rare, because we are in the light cone of an enormous area.
You are absolutely correct that we need one more postulate: I postulate that expanding civilizations destroy nonexpanding ones on contact. (They turn them into a negentropy source, or computronium, or whatever.) This suggests that we are in a relatively small part of the space-time continuum unoccupied by expanding civilizations.
But you are right, at this point I already postulated many things, so a straight application of the Anthropic Principle would be a cleaner solution to the Fermi Paradox than my roundabout ways. Honestly, in a longer exposition, like a top-level post, I wouldn’t even have introduced the idea as a solution to the Fermi Paradox. But it does show that our observations can be compatible with the existence of many civilizations.
I believe that the valuable part of my idea is not this (yet another) solution to the Paradox, but the proposed 0-1 law itself. I would be very interested in a discussion about the theoretical feasibility of light-speed expansion. More generally, I am looking for a solution to the following problem: if one optimizes a light-cone to achieve the greatest computational power possible, what will be the expansion speed of this computer? I am aware that this is not a completely specified problem, but I think it is specified well enough that we can start thinking about it.
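As one possible starting point for that discussion, here is a deliberately crude toy model (entirely my own, with made-up assumptions: uniform matter density, a shell at radius r captured at time r/v, captured mass computing at a constant rate until a fixed horizon T, and no relativistic or probe-energy costs). Under these assumptions the total computation comes out to πv³T⁴/3, which increases monotonically in v, so this toy optimizer always pushes as close to the speed limit as engineering allows, consistent with the 0-1 intuition:

```python
import math

def total_computation(v: float, T: float = 1.0, steps: int = 100_000) -> float:
    """Numerically integrate 4*pi*r^2 * (T - r/v) dr from r = 0 to v*T.

    Each shell of matter at radius r is captured at time r/v and then
    computes at a constant rate for the remaining time (T - r/v).
    """
    r_max = v * T
    dr = r_max / steps
    total = 0.0
    for i in range(steps):
        r = (i + 0.5) * dr  # midpoint rule
        total += 4 * math.pi * r * r * (T - r / v) * dr
    return total

# Closed form of the same integral: pi * v**3 * T**4 / 3 -- monotone in v,
# so in this (over-simple) model there is no interior optimum below c.
for v in (0.25, 0.5, 0.75, 1.0):
    print(f"v = {v}: total computation = {total_computation(v):.4f}")
```

Any realistic answer would have to add the terms this model omits (kinetic energy of probes, losses at relativistic speeds, cosmological expansion); the interesting question is whether any of them creates an interior optimum below c.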
Have you looked at Hanson’s ‘burning the cosmic commons’ paper?
Yes. After I figured this little theory out, I did some googling to find the first inventor. It seemed like such a logical idea that I couldn’t believe I was the first to reason like this. The googling led me to Hanson’s paper. As you note, that paper has some ideas similar to mine. These ideas are very interesting on their own, but the similarity is superficial, so they do not really help answer any of my questions. This is not surprising, considering that mine are physics and computer science questions rather than economics questions.
Later I found another, more relevant paper: ‘Thermodynamic cost of reversible computing’. Not incidentally, it was written by Tommaso Toffoli, who coined the term ‘computronium’. But it still doesn’t answer my questions.
Hanson’s paper is most useful for answering the question, ‘if civilizations could expand at light speed, would they?’ There are two pieces to the puzzle: the ability to do so and the willingness to do so.
As for the ability: are you not satisfied by general considerations of von Neumann probes and starwisps? Those aren’t going to get a civilization expanding at 0.9999c, say, but an average of 0.8c or 0.9c would, I’d think, be enough for your theory.
Those are unsupported arguments—speculation without basis in fact.
There are other satisfying resolutions to the Fermi Paradox, such as the idea that we are locally first. Doom-mongers citing Fermi for support should attempt to refute such arguments if they want to make a serious case that the Fermi paradox provides much evidence to support their position.
Anyway, I don’t see how these count as evidence that the “biggest” filters lie in the future. The Fermi paradox just tells us most planets don’t make it to a galactic civilisation quickly. There’s no implication about when they get stuck.