Much more common is the “let the AIs do their own thing, even if it eats humanity for breakfast, rather than shackling them to human-derived values” attitude, at least among AI folk (David Dalrymple, in the recent comment thread here, is one of many examples).
That is often known as “Beyondism”.
Most proponents of the view in connection with AI, in my experience, don’t seem to use the term or be familiar with Cattell. He’s more associated with genetic enhancement, e.g. Jim Flynn (of the Flynn Effect) discusses and rejects Cattell’s views in his book on moral philosophy and empirical knowledge, “How to defend humane ideals.”
FWIW, I picked up the term from Roko—who occasionally talked about “beyondist transhumanism”. Cattell’s “beyondism” seems to be frequently compared to social Darwinism.