There are two main challenges: the complexity of human values and safe self-modification. To correctly define the “charity percentage” so that what the AI leaves us is actually desirable, you need to specify human values about as well as a full FAI would. Safe self-modification is needed so that the AI doesn’t simply change the charity value to 0 (with a sufficiently general optimizer, this can’t be prevented by simple measures like “hard-coding” it), or otherwise corrupt its own utility function, whether explicit or implicit.
If you are capable of doing all that, you may as well make a proper FAI.
With all of them? How so?