Eliezer: This actually kinda sounds (almost) like something I’d been thinking for a while, except that your version added one (well, many actually, but this one is the one that’s useful in getting it all to add back up to normality) “dang, I should have thought of that” insight.
But I’m not sure if these are equivalent. Is this more or less what you were saying: “When we’re talking about ‘shouldness’, we mean something, or at least we think we mean something. It’s not something we can fully explicitly articulate, but if we could somehow fully, utterly, completely understand the operation of the brain, scan it, and somehow extract and process all the relevant data associated with that feeling of ‘shouldness’, we’d actually get a definition/defining computation/something that we could then work with to do a more detailed analysis of morality. And the reason that would actually be ‘the computation we should care about’ is that it’d actually be, well… the very bit of us that’s concerned with issues like that, more or less”?
If so, I’d say that what you have is useful, but not a full metamorality. I’d call it more a “metametamorality”; the metamorality would be what I’d know if I actually knew the specification of the computation, to some level of precision. To me, it seems like this answers a lot, but it does leave an important black box that needs to be opened. Although I concede that opening this box will be tricky. Good luck with that. :)
Anyways, I’d consider the knowledge I’d have from actually knowing a bit more about the specification of that computation a metamorality, and the outputs, well, morality.
Incidentally, the key thing that I missed that you helped me see was this: “Hey, that implicit definition of ‘shouldness’ sitting there in your brain structure isn’t just sitting there twiddling its thumbs. Where the heck do you think your moral feelings/suspicions/intuitions are coming from? It’s what’s computing them, however imprecisely. So you actually can trust, at least as a starting point, those moral intuitions as an approximation to what that implicit definition implies.”