Yes, I’m also very unsatisfied with most answers—though that includes my own.
My view of consciousness is that it’s not obvious what causes it, it’s not obvious we can know whether LLM-based systems have it, and it’s unclear that it arises naturally in the majority of possible superintelligences. But even if I’m wrong, my view of population ethics leans towards saying it’s bad to create things that displace current beings over our objections, even if those things are very happy. (And I think most of these futures end up with involuntary displacement.) In addition, I’m not fully anthropocentric, but I also probably care less about the happiness of beings that are extremely remote from myself in mind-space—and the longtermists seem to have bitten a few too many bullets on this front for my taste.
Makes sense!