Another criticism worth mentioning is an observational issue rather than a modeling one: if AI were a major part of the Great Filter, then we'd see it in the sky once the AGIs started to take control of the space around them at a substantial fraction of c. This should discount the probability of AGIs undergoing intelligence explosions. How much it should do so depends on how much of the Great Filter one thinks is behind us and how much one thinks is in front of us.
No, that’s backwards. If something takes over space at c, we never see it. The slower it expands, the easier it is to see, so the more our failure to observe it is evidence that it doesn’t exist.
In the hypothetical, it is expanding at a decent fraction of c, not at c. For us to see it, it needs to expand at a decent fraction of c. For example, suppose it expands at 1 meter/s. That's fast enough to easily wipe out a planet before you can run away effectively, but how long would it take to have a noticeable effect on even the nearest star? Well, if the planet is the same distance from its sun as the Earth is from ours (8 light-minutes), the front would take around 8 × 60 × 3 × 10^8 seconds, or roughly 4,500 years, to engulf that star. So we'd only notice if we happened to see something odd about that one star. And at 1 m/s it would take over a billion years to reach even the next star.
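As a sanity check on those numbers, here is a short sketch. The 1 m/s rate and the 8-light-minute planet-to-sun distance are the hypothetical values from the example above; the 4.2 light-years to the next star is my own rough figure (about the distance to Proxima Centauri), not from the discussion:

```python
# Timescales for a front expanding at a fixed 1 m/s.
C = 3e8                 # speed of light in m/s (rounded, as in the text)
YEAR = 3.156e7          # seconds per year (approx.)

speed = 1.0             # m/s: the hypothetical "artificially stupid" rate
to_sun = 8 * 60 * C     # 8 light-minutes in metres (planet-to-sun distance)
to_next_star = 4.2 * YEAR * C   # ~4.2 light-years in metres (rough figure)

print(f"engulf its own sun:  {to_sun / speed / YEAR:,.0f} years")       # ~4,500
print(f"reach the next star: {to_next_star / speed / YEAR:.1e} years")  # ~1.3e9
```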
The most easily noticeable things are those expanding at a decent fraction of c: fast enough for the effects to be visible across interstellar distances, but not so fast that it becomes impossible for us to notice before we get wiped out. AGIs expanding at a decent fraction of c would fall into that category. If something does expand at c, you are correct that we won't notice.
Something that expands at a fixed 1 m/s in all three regimes (on a planet, within a solar system, and between stars) qualifies as artificial stupidity.
Something that expands at 0.1c can be observed, but it carries a heavy anthropic penalty: observers can only exist in regions it has not yet reached, so we should not be surprised not to see it.
We don’t have a good idea how quickly something can expand between stars. The gaps between stars are big, and launching things fast is tough. The fastest we’ve ever launched something is Helios, which at maximum velocity reached a little over 0.0002c. I agree that 1 m/s would probably count as artificially stupid. There’s clearly a sweet range here. If, for example, your AI expanded at 0.01c, then it would never reach us in time if it started in another galaxy. Even your example of 0.1c (which is an extremely fast rate of expansion) means that one has to believe that most of the Filtration is not occurring due to AI.
If AI is the Great Filter and it expands at 0.1c, then we would need to live in an extremely rare lightcone to not have seen any sign of it. This argument is of course weak (and nearly useless) if one thinks that the vast majority of filtration is behind us. But otherwise, it strongly suggests that most of the Filter is not fast-expanding AI.
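One way to put rough numbers on the "rare lightcone" point is to compare when light from an expanding front first reaches an observer (after d/c) with when the front itself arrives (after d/v); the difference is the window during which the front is visible at all. The distances below are illustrative assumptions of mine (1,000 light-years for a source within our galaxy, about 2.5 million light-years for a neighbouring galaxy such as Andromeda), not figures from the discussion above:

```python
# Visibility window for a front starting d light-years away and expanding
# at a fraction v of c: its light arrives after d/c years (= d years),
# the front itself after d/v years, so it is observable for d*(1/v - 1) years.

def visibility_window_years(d_ly: float, v_frac_c: float) -> float:
    return d_ly * (1.0 / v_frac_c - 1.0)

cases = [(1_000, 0.1), (1_000, 0.01), (2_500_000, 0.01)]
for d, v in cases:
    print(f"d = {d:>9,} ly at {v}c -> visible for "
          f"{visibility_window_years(d, v):,.0f} years")
```

At 0.1c a front starting 1,000 light-years away would be visible for roughly 9,000 years before arriving, so if such expansions were at all common, surviving observers should typically catch at least one in progress.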
Yes, if things expanding at 0.1c are common, then we should see galaxies containing them, but would we notice them? Would the galaxy look unnatural from this distance?
Not directly relevant, but I’m not sure how you’re using filtration. I use it in a Fermi paradox sense: a filter is something that explains the failure to expand. An expanding filter is thus nonsense. I suppose you could use it in a doomsday argument sense—“Where does my reference class end?”—but I don’t think that is usual.
This would depend on what exactly they are doing to those galaxies. If they are doing stellar engineering (e.g. building Dyson spheres or Matrioshka brains, or doing stellar lifting), then we’d probably notice it in any nearby galaxy. But conceivably something might try to deliberately hide its activity.
Yes, I think I’m using it in a form closer to the second. In the first, Fermi-paradox sense, AGI is simply not a filter at all, which if anything makes the original point stronger.