Even leaving aside the matter of ‘permission’ (which leads into awkward questions of informed consent) as well as the difficulty of defining concepts like ‘people’ and ‘property’, try to define ‘do things to X’. Every action affects others. If you so much as speak a word, you’re causing others to undergo the experience of hearing that word spoken. For an AGI, even thinking draws a minuscule amount of electricity from the power grid, which has near-negligible but quantifiable effects on the power industry, which in turn affect humans in any number of ways. If you take chaos theory seriously, you can take this even further. It may seem obvious to a human that there’s a vast difference between innocuous actions like those in the above examples and those that are potentially harmful, but lots of things are intuitively obvious to humans and yet turn out to be extremely difficult to quantify precisely, and this seems like just such a case.
I have no idea what ‘there is an objective morality’ would mean, empirically speaking.
More concerning to me than outright unfriendly AI is an AI whose creators attempted to make it friendly but only partially succeeded, such that our state is relevant to its utility calculations, but not necessarily in ways we’d like.
Ok, I understand it in that context, as there are actual consequences. Of course, this also makes the answer trivial: of course it’s relevant, since it gives you advantages you wouldn’t otherwise have. Though even in the sense you’ve described, I’m not sure whether the word ‘morality’ really seems applicable. If torturing people let us levitate, would we call that ‘objective morality’?
EDIT: To be clear, my intent isn’t to nitpick. I’m simply saying that patterns of behavior being encoded, detected and rewarded by the laws of physics doesn’t obviously seem to equate those patterns with ‘morality’ in any sense of the word that I’m familiar with.
What would an AI that ‘cares’ in the sense you spoke of be able to do to address this problem that a non-‘caring’ one wouldn’t?
Leaving aside other matters, what does it matter if an FAI ‘cares’ in the sense that humans do so long as its actions bring about high utility from a human perspective?
After reading this, I became incapable of giving finite time estimates for anything. :/
...Has someone been mass downvoting you?
No. Clippy cannot be persuaded away from paperclipping because maximizing paperclips is its only terminal goal.
The primary issue? No matter how many times I read your post, I still don’t know what your claim actually is.
This is (one of the reasons) why I’m not a total utilitarian (of any brand). For future versions of myself, my preferences align pretty well with average utilitarianism (albeit with some caveats), but I haven’t yet found or devised a formalization which captures the complexities of my moral intuitions when applied to others.
Approximately the same extent to which I’d consider myself to exist in the event of any other form of information-theoretic death. Like, say, getting repeatedly shot in the head with a high-powered rifle, or having my brain dissolved in acid.
Because I terminally value the uniqueness of my identity.
If L-zombies have conscious experience (even when not being ‘run’), does the concept even mean anything? Is there any difference, even in principle, between such an L-zombie and a ‘real’ person?
A paperclip maximizer won’t wirehead because it doesn’t value world states in which its goals have been satisfied, it values world states that have a lot of paperclips.
In fact, taboo ‘values’. A paperclip maximizer is an algorithm whose output approximates whichever output would lead to world states with the greatest expected number of paperclips. This is the template for maximizer-type AGIs in general.
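To make the ‘algorithm, not values’ framing concrete, here is a minimal sketch of my own (the action names, the toy world model, and `expected_paperclips` are all made up for illustration): a maximizer reduces to an argmax over actions ranked by expected paperclip count, with no term anywhere representing ‘satisfaction’, which is why wireheading never scores well.

```python
import random

# Toy world model: each action maps to a distribution over resulting
# paperclip counts. All numbers here are purely illustrative.
WORLD_MODEL = {
    "build_factory": lambda: random.gauss(1000, 200),
    "wirehead":      lambda: 0.0,   # blissful self-stimulation produces no paperclips
    "do_nothing":    lambda: random.gauss(10, 2),
}

def expected_paperclips(action, samples=10_000):
    """Monte Carlo estimate of the expected paperclip count after `action`."""
    outcomes = WORLD_MODEL[action]
    return sum(outcomes() for _ in range(samples)) / samples

def choose_action():
    """Pick the action whose predicted world state contains the most
    paperclips. Note there is no variable tracking whether the agent
    'feels' its goals are satisfied, only predicted paperclip counts."""
    return max(WORLD_MODEL, key=expected_paperclips)

if __name__ == "__main__":
    print(choose_action())  # -> "build_factory" under this toy model
```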
If the universe is infinite, then there are infinitely many copies of me, following the same algorithm
Does this follow? The set of computable functions is infinite, but has no duplicate elements.
I think this should get better and better for P1 the closer P1 gets to the mixed strategy (2/3)C + (1/3)B (without actually reaching it).
Well, ok, but if you agree with this then I don’t see how you can claim that such a system would be particularly useful for solving FAI problems.
an Oracle AI you can trust
That’s a large portion of the FAI problem right there.
EDIT: To clarify, by this I don’t mean to imply that FAI is easy, but that (trustworthy) Oracle AI is hard.
I don’t think Harry meant to imply that actually running this test would be nice, but rather that one cannot even think of running this test without first thinking of the possibility of making a horcrux for someone else (something which is more-or-less nice-ish in itself, the amorality inherent in creating a horcrux at all notwithstanding).