CEV, idealized reflection, and viatopia are all obvious and just a can-kicking circlejerk. Anti-realism is true and this changes nothing. The religious undertone that there is some sort of convergent nirvana once you think hard enough is not true. Cosmopolitanism is only better than heroin-tiled rats if you assume certain axioms. You should listen to smarter, wiser people, duh. We all know this already. How is this profound when applied to normative ethics?
Agreed that the ideas are kind of obvious (from a certain rationalist perspective); nonetheless they are:
1. not widely known outside of rationalist circles, where most people might consider “utopia” to just mean some really mundane thing like “tax billionaires enough to provide subsidized Medicaid for all” rather than defeating death and achieving other assorted transhumanist treasures
2. potentially EXTREMELY important for the long-term future of civilization
In this regard they seem similar to the idea of existential risk, or the idea that AI might be a really important and pivotal technology—really really obvious in retrospect, yet underrated in broader societal discourse and potentially extremely important.
Unlike AI & x-risk, I think people who talk about CEV and viatopia have so far done an unimpressive job of exploring how those philosophical ideas about the far future should be translated into relevant action today. (So many AI safety orgs, billion-dollar companies getting founded, government initiatives launched, lots of useful research and lobbying etc. getting done—there is no similar game plan for promoting “viatopia” as far as I know!)
“The religious undertone that there is some sort of convergent nirvana once you think hard enough is not true.”—can you argue for this in a convincing and detailed way? If so, that would be exciting—you would be contributing a very important step towards making concrete progress in thinking about CEV / etc., the exact tractability problem I was just complaining about!! But if you are just asserting a personal vibe without actual evidence or detailed arguments to back it up, then I’d not baldly assert “...is not true”.
I also want to separately add that part of my frustration here (and the “can-kicking” part I mention) is that I worry this is just going to be weaponized as a reason to keep EA and LW glued together, even as obvious cracks develop. That would be fine—if we had a democracy—but we don’t. So at some point the glue is a weapon for those in the community with de facto control to keep trudging forward without having to account for the increasing differences in moral views of those within.
Actually, they are extremely well known outside of rationalist circles. Many subgroups of the Jewish and Buddhist faiths are pretty much built upon these principles. My parents told me “don’t put all your chips on the table” and to ~keep optionality open. Some might even argue this is the core principle that has led to “democracy”. And yes, as you rightly mentioned, these are clearly foundational principles behind LW and EA. That’s why I use the strong language of “circlejerk”. This is really just unnecessarily reinventing common English phrases. Viatopia perhaps gives the idea a bit of an action-relevant flavor, so I guess it extends a bit beyond the others, but it’s still not particularly new or insightful.
“can you argue for this in a convincing and detailed way?”
I mean the argument is so underpowered it’s hard to even know where to start. I actually don’t even think the concept is coherent tbf but I’ll try.
Assuming you are coming from the view that you can take some sentient (or intelligent) being, keep the “essence” of that being while making it smarter and giving it more inference time, and that all such beings will then start whipping, dabbing, and hitting the nae nae in synchronicity.
(I would say there is no coherent concept of self-modification/enhancement that preserves the original essence, so the premise is already meaningless, but I’ll cast that aside.)
Then sure, take a sentient being whose value function is completely determined. It can never change its mind, tautologically. So it will never hit this convergent nirvana: its values are already fixed.
I must be confused, because I don’t see how this could be any other way. And the funny thing is, even if I’m wrong about this, and somehow you jack up the IQ and inference time to the wazoo and the atoms start vibing out, this still wouldn’t make their goals correct. You still haven’t solved the is-ought problem.