I think you can steelman Ben Goertzel-style worries about near-term amoral applications of AI being bad “formative influences” on AGI, but mostly under a continuous takeoff model of the world. If AGI is a continuous development of earlier systems, then maybe it shares some datasets and learned models with earlier AI projects, and it definitely shares the broader ecosystem of tools, dataset-gathering methodologies, model-evaluation paradigms, and institutional knowledge on the part of the developers. If the ecosystem in which this thing “grows up” is one that has previously been optimized for marketing, or military applications, or what have you, this is going to have ramifications for how the first AGI projects are designed and what they get exposed to. The more continuous you think the development is going to be, the more room there is to intervene on this by trying to make sure that AI is pro-social even in the short term.