I like that you’re discussing the question of purpose in a world where intelligences way smarter than you do all the useful knowledge work, and you are useless to your civilisation as a result. The frontier intelligences might have purpose (or they might not even care whether they do), but you might not.
I was disappointed by Bostrom’s answer of “let’s just create more happy beings”.
Also, what makes those beings happy in the first place? Having the purpose of making even more beings? I don’t actually care that much about populating the universe with beings whose only purpose is self-replication.
His entire book “Deep Utopia” doesn’t seem to provide a good answer.
I was also disappointed by Yudkowsky’s answer of “let’s ask the AI to run a gazillion mind simulations and have it figure it out on our behalf”.
This might be a good answer, but it is too meta: it does not tell me what the final output of such a process would be, in a way that I, as a human in 2025, can relate to.
I like Holden Karnofsky’s take on why describing utopia goes badly, but I was not very satisfied with his answer to what utopia looks like either.
What is the use of social interactions if there’s no useful project people can do together? Art and knowledge will both be produced better by frontier intelligences, so anything these people produce will be useless to society.
For that matter, why will people bother with social interactions with other biological humans anyway? The AI will be better at that too.
(Mandatory disclaimer: I think Karnofsky is accelerating us towards human extinction; my endorsing an idea of his does not mean I endorse his actions.)
Yes, I’m also very unsatisfied with most answers—though that includes my own.
My view of consciousness is that it’s not obvious what causes it, it’s not obvious we can know whether LLM-based systems have it, and it’s unclear that it arises naturally in the majority of possible superintelligences. But even if I’m wrong, my view of population ethics leans towards saying it is bad to create beings that displace current beings over our objections, even if they are very happy. (And I think most of these futures end up with involuntary displacement.) In addition, I’m not fully anthropocentric, but I probably also care less about the happiness of beings that are extremely remote from me in mind-space, and the longtermists seem to have bitten a few too many bullets on this front for my taste.
Makes sense!