"they would then only need a slight preponderance of virtue over vice"
This assumes that morality has only one axis, which I find highly unlikely. I would expect the seed to quickly radicalize, becoming good in the ways the seed prefers and evil in the ways the seed prefers. Under this model, if the seed comes up good on any randomly chosen axis 51% of the time, I would expect the aligned AI to remain only 51% good per axis.
Assuming the axes do interact, and do so inconveniently (for instance, if evil has higher evolutionary fitness, or if self-destruction becomes trivially easy at high capability levels), an error along any one axis could break the entire system.
Also, if I do grant this premise, then I would expect the reverse to be true as well: a slight preponderance of vice would suffice for an evil outcome.
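The arithmetic behind this worry can be sketched in a few lines, under a toy model of n independent moral axes, each of which the seed gets right with probability 0.51 (both the independence and the exact figure are assumptions for illustration, not claims from the argument above):

```python
# Toy model (assumption): n independent moral axes, each "good" with p = 0.51.
# Per-axis goodness stays at 51%, but the chance of coming up good on
# every axis at once collapses as the number of axes grows.
p = 0.51

for n in (1, 10, 50, 100):
    all_good = p ** n  # probability that no axis comes up evil
    print(f"{n:>3} axes: P(good on every axis) = {all_good:.2e}")
```

Even with a favorable per-axis bias, the chance that no axis at all comes out wrong shrinks exponentially, which is the sense in which "an error along any one axis could break the entire system."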
"One might therefore wish to only share code for the ethical part of the AI"
This assumes you can discern which part is the ethical part, and that the ethical part is separable from the intelligent part.
Even given that, I still expect massive resources to be diverted from morality toward intelligence: A, because people want power, and people with secrets are stronger than those without; and B, because people don't trust black boxes, and will want to know what's inside before it kills them.
Thence, civilization would just reinvent the same secrets over and over again until the time limit runs out.
Given: Closed source Artificial General Intelligence requires all involved parties to have no irreconcilable differences.
Thence: The winner of a closed source race will inevitably be the party with the highest product of homogeneity and intelligence.
Thence: That party is the CCP.
Given: Alignment is trivial.
Thence: The resulting AI will be evil.
Given: Alignment is difficult.
Given: It’s not in the CCP’s character to care.
Thence: Alignment will fail.
Based on my model of reality, closed sourcing AI research is close to the most wrong and suicidal decision possible (if you're not the CCP). Closed groups fracture easily. Secrets breed distrust, which in turn breeds greater secrecy and smaller shards. The only resolution to the resulting high-conflict environment is a single party wielding overwhelming power.
Peace, freedom, civilization: all begin with trust. Simply building aligned AI is insufficient. Whom it is aligned with is absolutely critical.
Given this, to create a civilized AI, the creator must create in a civilized manner.