I have also speculated on the need for a strong exterior threat. The problem is that there isn’t one that wouldn’t either be solved too quickly or introduce its own polarizing problems.
A supervillain doesn’t work because they lose too quickly; see Archimedes, Giorgio Rosa, et al.
Berserkers are bad because they either won’t work or will work too well. I can’t see any way to make them a long-term stable threat without explicitly programming them to lose.
Rogue AI doesn’t work either: depending on its quality and goal structure, it self-destructs, kills us too quickly, or possibly sublimes.
The best proposal I’ve ever heard is a rival species, something like an ant the size of a dog, whose lack of individual intelligence is offset by stealthy hives, cooperation, and physical toughness. But it would be hard to engineer one.
What? I didn’t realize humility had become an objective value that changes the results of your actual actions. Who cares why people save lives, or how brave they are inside?
Gandhi built a movement that your anonymous nonviolent protester belonged to, and he has inspired millions to be better people. I think that’s a plus, and I don’t really care if someone ‘more deserving’ ‘sacrificed more’ in a greater cause but to less effect.