It’s a good sign that you think it’s going over your head. Most of the time when people think “metaethics” (or metaphysics, or even normal ethics) is going over their head, it’s a sign that it’s actually nonsense. It’s always awkward trying to disprove nonsense, as dope Australian philosopher David Stove argued (http://web.maths.unsw.edu.au/~jim/wrongthoughts.html).
EY and a few others have the ability to gaze into the technological machinations of a distant future involving artificial minds and their interactions with the coolest theoretical physics engineering feats. It’s clear to all smart people this is an inevitable conclusion, right? Fine.
Okay, going back to ethics, the point of the post (sorry, metaethics). Moral relativism is nothing more than a concept we hold in our minds. Insofar as it classifies different human beliefs about the world, and predicts people's actions, it's a useful term. It has no particularly profound meaning otherwise. It's nothing more than a personal belief about how others should behave. You can't test moral relativism; it has no fundamental property. The closest you can get to testing it is, as I just noted, asking how well it predicts different human behaviors.
Again, you tried to break this down, which is understandable. But it's not possible to refute or break down absolute nonsense. Some paperclip maximizer doesn't have values? So it won't respond to some sort of 'argument' (which is an anthropomorphic, nonsensical set of information for the paperclip maximizer). And somehow this now connects to an argument that some other species will have some value, but it might be a bad one?
Please let me know if you think I'm missing something, or some context from previous stuff he's written that would change my interpretation of the writing above.