I agree in general with this point, but in this context we have a problem even if the scaling is something extremely favorable like ~ log log N instead of ~ N, at least under the assumption that the universe is infinite.
The real problem is that we don’t know N, and in an infinite universe it could be arbitrarily large. So while I agree that linear scaling is too pessimistic an assumption, and that the right scaling is probably closer to a power law with a smaller exponent, I don’t see how any scaling that’s unbounded in N gets around the anthropic problems.
I’m just generally confused by anthropics and I made this post to get the opinions of other people on whether this is actually a problem or not.
Continuing this thread because I had a thought that seems interesting: Robin Hanson’s grabby aliens model actually makes predictions about how useful serial compute is versus parallel compute.
Specifically, if intelligent life evolves in N hard steps, then the probability of intelligent life evolving in time T on a given planet scales as T^N when T is small, so doubling the available time multiplies the probability by 2^N. Hanson’s point estimate of N is around 10, based on the evidence I discuss in the body of the question and some other sources of evidence he considers.
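The T^N claim can be checked numerically. A minimal sketch: model the N hard steps as sequential exponential waiting times (a standard assumption in the hard-steps literature, not something specific to Hanson's model), so the completion time is Erlang-distributed, and verify that doubling a short window T multiplies the success probability by about 2^N. The function name is mine; the Poisson-tail form of the Erlang CDF is used because the usual 1 − Σ form cancels catastrophically at tiny T.

```python
import math

def p_all_steps_done(T, N, rate=1.0, terms=50):
    """P(N sequential exponential steps, each with the given rate, all
    finish within time T). This is the Erlang CDF, computed via the
    equivalent Poisson tail P(Poisson(rate*T) >= N) to avoid the
    catastrophic cancellation of the 1 - sum(...) form when T is tiny."""
    x = rate * T
    return math.exp(-x) * sum(x**k / math.factorial(k) for k in range(N, N + terms))

N = 10
T = 0.001  # window much shorter than the expected per-step time 1/rate
ratio = p_all_steps_done(2 * T, N) / p_all_steps_done(T, N)
print(ratio)  # close to 2**N = 1024 in the small-T regime
```

The leading term of the Poisson tail is x^N/N!, which is where the T^N scaling comes from; the ratio only approaches exactly 2^N as T → 0.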
Furthermore, Hanson’s ~ 1 Gly estimate of the distance between grabby alien origins also suggests that, even in the presence of “quiet aliens”, an upper bound on the typical volume you need to get a grabby civilization origin within 10 Gly is around (1 Gly)^3, a volume that contains roughly 10^20 planets in orbit around stars. So life on a given planet probably had a 10^(-20) chance of evolving in ~ 10 Gly, and with T^10 scaling the chances go to roughly even odds once we scale the time up to ~ 1000 Gly, since (1000/10)^10 = 10^20. Therefore this whole story might only contribute a factor of 100 to the total compute we need, which is relatively insignificant. The scaling with N of the correction we get from anthropics is around ~ 20/N in OOM space (after dropping some terms scaling with log N): if N = 1 we get 20 OOM of correction in serial compute, if N = 2 we get 10 OOM, et cetera.
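The arithmetic above can be made explicit. A sketch, using only the numbers from this comment (the 20-OOM probability deficit and the T^N scaling; the function name is mine): to close a 10^-20 deficit we need (10^x)^N = 10^20, so x = 20/N extra orders of magnitude of serial time.

```python
def anthropic_correction_ooms(N, deficit_ooms=20):
    """Orders of magnitude of extra serial time needed so that T^N
    scaling closes a 10^-deficit_ooms probability deficit:
    solve (10^x)^N = 10^deficit_ooms for x."""
    return deficit_ooms / N

for N in (1, 2, 10):
    print(N, anthropic_correction_ooms(N))  # 20.0, 10.0, 2.0 OOM
```

At Hanson's point estimate N = 10 this gives 2 OOM, i.e. the factor-of-100 increase in serial time from ~ 10 Gly to ~ 1000 Gly.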
So at least on this model my power law intuition was correct, and the exponent of the power law is the number of hard steps in evolution. If N is big, the anthropic shadow is actually quite small in serial compute space.