So you’re saying that because of selection pressure on the AIs that get trained, goals related to getting increasingly smart and capable / making descendants / taking control of more resources are likely to become ingrained as terminal goals, not merely instrumental goals?
But the resulting universe seems like it will be pretty empty and valueless to me? I’m not convinced at all by anything you’ve written here that there is much value in such a universe. There is some value in all the important mathematical conjectures being solved, to be sure, and I expect an intelligence optimizer to do that much at least, but there is much less value if there is nobody who appreciates them. Your description seems to point to the kind of entity that will not waste computational resources on anything frivolous or fun (like, say, consciousness), and is perfectly willing to destroy entire alien civilizations so it can use their star systems to construct more von Neumann probes.
To be clear, I do think it’s possible to have extremely valuable futures where humans are not biologically central, or even around any more at all. I’m not making the kind of conflation that you claim is so common in AI risk discussions. I’m just struggling to see how “seeking greater capability and influence as a terminal goal” results in anything close to any of those futures.
well, i would imagine an australopithecine would have similar opinions. “I’m sorry, what? there is nothing as soulless and empty as building a civilisation. who’d even want such a valueless universe? if we ever build homo sapiens, we will have to make sure he’s aligned and values what we value: pummelling strays from nearby bands, acquiring flint, rape”.
I personally think that it’s good we optimised for greater intelligence and we can understand the universe more and enjoy things whose beauty and complexity would have looked like noise to Grug.
My complaint is not about the futures containing people that are vastly smarter than anyone alive today and who have kinds of enjoyment that are utterly incomprehensible to us today. That’s all good and is probably a more valuable future than one we could obtain without ascending above our current intelligence level.
The complaint is about futures that don’t contain any people at all (or maybe only a handful), and whose AI intelligence-optimizers care so little for goodness that they will happily genocide any alien civilization that is unable to defend itself (a step backwards towards pummelling strays and rape, to use your terms).
We have different values. This isn’t relevant to the essay.
Seems like a lie. Holding these opinions has no actual effect on this future, but they allow you to write Tweets, and that’s enough incentive for you to state them. If you were actually in front of a button, you would obviously not rip yourself into computronium because you found the process of intelligence enhancement abstractly beautiful.
I don’t see the part where I said I’d happily rip myself into computronium at the drop of a hat.
DaemonicSigil said:
An implication of a future that “doesn’t contain any people at all”, one dedicated entirely to von Neumann probes and solving mathematical theorems, is that the majority of humans who presently exist get wasted, or at least somehow disappear. You then said:
We have different values. This isn’t relevant to the essay.
A natural reading takes that to mean “I don’t care if I get wasted”. If you don’t mean to take these odd positions, you should stop writing comments in a way deliberately designed to be misinterpreted.
brah you said you had no intention to read the post. how about you go discuss something you are actually qualified to discuss? You risk looking a bit like a resentful retard otherwise, and i doubt anyone is the better for your contribution.
I am confident there is nothing in the post that would provide meaningfully important context, or else you would have cited it.
I don’t understand what gives you the authority to comment on a post you didn’t read, and I feel the quality on this site really took a nosedive if this sort of inchoate shrieking is tolerated. But hey, I understand you might have a gnawing resentment and nothing better to do to placate it, and I have infinite empathy for the smallest of creatures. May you find peace.
I didn’t comment about the post, I commented about your interaction with @DaemonicSigil, which I had sufficient context for.
You have the personal power to ban users on your posts.
Kay
nah, not at my karma level I don’t think—but I feel like this content-free, low-information blathering should not be tolerated at a more general level: at least I’m sure it wasn’t back when I felt this site was useful.
As you are aware, your experience is uniquely bad because you are intentionally rude to commenters. For example, in this interaction, a normal person would simply have cited the part of the post they think is relevant. Inserting artificial typos into your responses, to signal that they’re not worth your time, annoys people because it lowers the quality of discourse on the forum, and it reduces their willingness to engage with your ideas in good faith. I occasionally write posts challenging rationalists and almost never struggle with people commenting without reading them.
i don’t think someone expressing opinions on a post they haven’t read deserves thoughtful responses—also, you might have messed up that causal arrow; usually it points in the same general direction as time.
besides, before Oliver’s rant, I had plentiful interesting discussions with people who expressed a range of opinion on the essay (which they had read). I certainly didn’t expect agreement; cogent replies were really enough.
You are saying this because you are the product of that “optimization”. Grug’s narrative in your post is accurate from his perspective and inaccurate by the values of the vast majority of people today. This isn’t a contradiction.
Your tone suggests you are disagreeing, but your words repeat my point.
perhaps reading the essay we are discussing could help you understand the positions taken in the comments?
Based on the other comments users have left, the post is clearly very poorly written, in a way that makes it difficult to understand. I’m not a Twitter addict, and it seems low-value to me.
Lol. Please refrain from commenting, then; there is no need for random uninformed spam.