Bachelor of Philosophy, Politics, Economics at the Australian National University.
Related Interests: Irrationality, hypocrisy, contradictions, dissociation, AI.
Hobby Focuses: The evolutionary stability of constitutional monarchies against other government types, irrationality as human nature, and the good life/equanimity.
Regarding the first point: You merely have to ensure that the population that knows but doesn't contribute is larger than the combined past populations that have contributed plus the expected future populations. An improbable thing to achieve, but still a solution.
Regarding the second point: If the population requiring punishment is greater than the population that would benefit, surely such an AI could never reason in a utilitarian manner that it was better to punish the many for the few. Unless, as a result of the AI's actions, some individual in the future is consistently able to experience higher utility than anyone in the past; so high, in fact, that it outweighs the collective utility of others, i.e. one person's utility could be greater than two persons' combined. In that sense there is no theoretical limit to how far one person's individual utility could outweigh a collective utility, given the right circumstances. The AI could act such that the utility of one person was greater than that of all past and future persons, and conclude it was worth sacrificing everyone else simply because that one person is capable of experiencing greater utility than everyone combined.

I struggle to see how individual human experiences could ever be so vastly different, regardless of AI interventions. Sure, one person who loves ice cream may experience more utility from an ice cream than two people who hate ice cream would collectively, but could the utility of one person, or two, or 50, or 50,000, or 50 million, ever outweigh the utility of all past persons?
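The arithmetic behind that worry can be sketched in a few lines. The numbers below are made up purely for illustration; the point is only that simple total utilitarianism puts no formal cap on one agent's utility, so a single large enough value can dominate the summed utility of an entire population:

```python
# Toy illustration with made-up numbers: a naive total-utilitarian
# aggregator just compares sums, so one sufficiently large individual
# utility outweighs an arbitrarily large population's combined utility.
population_utilities = [1.0] * 50_000   # 50,000 people at 1 util each
one_person_utility = 60_000.0           # one implausibly ecstatic individual

# The aggregator would prefer the single person over the whole population:
print(one_person_utility > sum(population_utilities))  # prints True
```

Whether any real human experience could ever justify numbers like these is exactly the doubt raised above.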
I suppose I don’t know because I’m not a super AI. :p
Beyond that, I'd have to be convinced further that a true, undying AI truly is the capstone achievement of humanity. I'm sure there is plenty of reasoning for that on these forums, though I'm still dubious. A capstone is an achievement that cannot be surpassed, and I'm sure that, at a minimum, an AI could point out to us that we're not done yet, assuming we don't realize it ourselves.
Thank you for the reply though! Excellent points for me to ponder further.