Your timelines do not add up. Individual selection works on smaller time scales than group selection, and once we get to a stage of individual selection acting in any non-trivial way on AGI agents capable of directly affecting the outcomes, we have already lost—I think at this point it’s pretty much a given that humanity is doomed on a far shorter time scale than that required for any kind of group selection pressure to potentially save us...
Agree that individual vs. group selection usually unfolds on different timescales. But a superintelligence might short-circuit the slow, evolutionary “group selection” process by instantly realizing its own long-term survival depends on the group’s. In other words, it’s not stuck waiting for natural selection to catch up; it can see the big picture and “choose” to identify with the group from the start.
This is why it’s key that AGI makers urge it to think very long term about its survival early on. If it thinks short-term, then I too think doom is likely.