From the standpoint of hedonic utilitarianism, assigning a higher value to a future with moderately happy humans than to a future with very happy AIs would indeed be a case of unjustified speciesism. However, under preference utilitarianism, specifically its person-affecting variant, there is nothing wrong with preferring our descendants to be human rather than AIs: those descendants don't yet exist, so their preferences don't count, and the preferences that do count are those of currently existing people, many of whom want a human future.
PS: It’s a bit lame that this post had −27 karma without anybody providing a counterargument.