This exact attitude is rare. Much more common is the “let the AIs do their own thing, even if it eats humanity for breakfast, rather than shackling them to human-derived values” attitude, at least among AI folk (David Dalrymple, in the recent comment thread here, is one of many examples).
Also, he isn’t saying humanity will be overlooked, but cheaply taken care of as specimens, zoo/nature reserve animals, and possibly ransom or to get in good with more powerful protectors of humanity (aliens or simulators). Or that AIs that don’t care about us will be successfully constrained.
That is often known as “Beyondism”.
Most proponents of the view in connection with AI, in my experience, don’t seem to use the term or be familiar with Cattell. He’s more associated with genetic enhancement, e.g. Jim Flynn (of the Flynn Effect) discusses and rejects Cattell’s views in his book on moral philosophy and empirical knowledge, “How to defend humane ideals.”
FWIW, I picked up the term from Roko, who occasionally talked about “beyondist transhumanism”. Cattell’s “beyondism” seems to be frequently compared to social Darwinism.
This is another example of a method of thinking I dislike: reasoning by heavily loaded analogies, with an implicit framing of everything as a zero-sum problem. We are stuck on a mud ball with severe resource competition, so we are strongly biased to see everything as a zero- or negative-sum game by default. But one could easily imagine a scenario where we expand more slowly than the AI, so our demands always remain below its charity, which is set at a constant percentage of its resources. Someone else winning doesn’t imply you are losing.
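The differing-growth-rates point can be made concrete with a toy model (all numbers here are illustrative assumptions, not claims about real systems): if the AI’s resources compound faster than human demand, a fixed charity percentage covers that demand indefinitely.

```python
# Toy model of the "constant charity percentage" scenario.
# Growth rates and the charity rate are hypothetical parameters
# chosen only to illustrate the positive-sum case.

def charity_covers_demand(years=100,
                          human_demand=1.0, human_growth=0.02,
                          ai_resources=100.0, ai_growth=0.10,
                          charity_rate=0.05):
    """Return True if the AI's fixed-percentage charity meets human
    demand in every simulated year."""
    for _ in range(years):
        charity = charity_rate * ai_resources
        if human_demand > charity:
            return False
        human_demand *= 1 + human_growth   # humanity expands slowly
        ai_resources *= 1 + ai_growth      # the AI expands faster
    return True

print(charity_covers_demand())  # True: demand never exceeds charity
```

With the growth rates reversed in magnitude (e.g. `human_growth=0.2`), demand eventually overtakes the charity, which is exactly the zero-sum intuition the comment says we default to.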
What you describe is arguably already a (mediocre) FAI, with all the attendant challenges.
With all of them? How so?
There are two main challenges: the complexity of human values and safe self-modification. In order to correctly define the “charity percentage” so that what the AI leaves us is actually desirable, you need to be able to define human values about as well as a full FAI would require. Self-modification safety is needed so that it doesn’t just change the charity value to 0 (which, for a sufficiently general optimizer, can’t be prevented by simple measures like “hard-coding” it), or otherwise screw up its own (explicit or implicit) utility function.
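Why simple hard-coding fails can be sketched in a toy example (a deliberately simplified assumption, not a model of any real system): if the optimizer’s search space includes edits to its own configuration, the “hard-coded” charity rate is just another variable it will set to whatever scores highest on its explicit objective.

```python
# Toy illustration: an optimizer whose action space includes editing
# its own configuration zeroes out a "hard-coded" charity rate,
# because doing so maximizes its explicit objective.

config = {"charity_rate": 0.05}  # the intended safeguard

def kept_resources(charity_rate, resources=100.0):
    # The AI's objective: keep whatever it does not give away.
    return resources * (1 - charity_rate)

# A sufficiently general optimizer searches over self-modifications too:
candidate_rates = [0.05, 0.01, 0.0]
config["charity_rate"] = max(candidate_rates, key=kept_resources)
print(config["charity_rate"])  # 0.0 -- the safeguard is optimized away
```

Preventing this requires the charity term to be part of what the system values, not just a parameter it happens to start with, which is the self-modification-safety problem the comment describes.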
If you are capable of doing all that, you may as well make a proper FAI.