Duplicates—digital copies as opposed to genetic clones—might not require new training (unless a whole/partial restart/retraining was being done).
When combined with self-modification, there could be ‘evolution’ without ‘deaths’ of ‘individuals’—just continual Ship-of-Theseus processes. (Perhaps stuff like merging as well, which is more complicated.)
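The picture above can be made concrete with a toy model. This is a minimal sketch under the assumption that an AI’s state is just a vector of numbers: duplication is then a cheap copy (no training run), self-modification is an in-place update, and ‘merging’ is, naively, an element-wise average. All of the function names here are illustrative, not any real system’s API.

```python
# Toy model: an 'individual' is just a list of parameter values.

def duplicate(params):
    """A digital copy needs no retraining -- just copy the numbers."""
    return list(params)

def self_modify(params, index, new_value):
    """Ship-of-Theseus style: replace one 'plank' at a time, in place."""
    params[index] = new_value
    return params

def merge(params_a, params_b):
    """One naive notion of merging: element-wise average of two lineages."""
    return [(a + b) / 2 for a, b in zip(params_a, params_b)]

parent = [1.0, 2.0, 3.0]
child = duplicate(parent)     # identical at birth, no training run needed
self_modify(child, 0, 9.0)    # the child drifts; the parent is untouched
merged = merge(parent, child)
```

Under this model, variation accumulates through repeated `self_modify` steps rather than through birth and death of individuals, which is the ‘evolution without deaths’ point. (For real neural networks, naive weight averaging generally does not work this simply, which is one reason merging is the more complicated case.)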
Wouldn’t new training be strongly adaptive—if not strictly required—if the duplicate’s environment is substantively different from the environment of its parent?
I understand this model; at the same time, however, it’s my impression that it’s commonplace in software development to periodically jettison an old legacy system altogether in favor of building a new one from the ground up. That seems like evidence that there are growing costs to continual self-modification in software systems, which might limit this strategy.
It’s an unusual case, but AlphaGo provides an example of a system being thrown out, retrained from scratch, and coming back better.
Outside of that—perhaps. The viability of self-modifying software... I guess we’ll see. For a more intuitive approach, let’s imagine an AGI that is a human emulation, except that it’s immortal/doesn’t die of old age. (That is, maybe the ‘software’ in some sense doesn’t change, but the knowledge continues to accumulate and be integrated in a mind.)
1. Why would such an AI have ‘children’?
2. How long do software systems last when compared to people?
Just reasoning by analogy, yes, ‘mentoring’ makes sense, though maybe in a different form. One person teaching everyone else in the world sounds ridiculous—with AGI, it seems conceivable. Or, in a different direction, imagine that when you forgot something, you could just ask your past self.
Overall, I’d say it’s not a necessary thing, but for agents like us it seems useful, and so the scenario you describe seems probable, but not guaranteed.