“Morality is awesome”, as a statement, scans like “consent is sexy” to me. Neither of these statements is true enough to be useful except as signalling or as a personal goal (“I would like to find X thing I believe to be moral more awesome, so as to hack my brain to be more moral”).
In some cases of assessing morality/awesomeness or consent/sexiness correlation, one would sometimes have to lie about their awesomeness/sexiness preferences, and ignore those preferences in order to be a Perfectly Moral Good Individual who does not Like Evil Things.
“Morality is awesome”, as a statement, scans like “consent is sexy” to me.
It was secretly meant to be parsed the other way: “awesome is morality”. Sorry to confuse.
It’s not about signalling; it’s supposed to be an entirely personal thing.
It’s not about hacking your brain to find your current conception of morality more awesome either. It’s about flushing out your current conception of morality and rebuilding it from intuition without interference from Deep Wisdom or philosophical cached thoughts.
In some cases of assessing … one would sometimes have to lie … in order to be a Perfectly Moral Good Individual who does not Like Evil Things.
I assume the capitals are about signaling “goodness”. Sometimes one will have to lie about what is actually moral, in order to appear “moral”. The awesomeness basis is orthogonal to this, except that it seems to make the difference between what is actually good and “morality” more explicit.
I assume the capitals are about signaling “goodness”
I use Meaningful Initial Caps to communicate tone, but recognize that it’s nonstandard. Sorry for any confusion.
So as far as I can tell, you’re saying that “awesomeness” is a good basis for noticing what one’s brain currently considers moral, so it can then rebuild its definitions from there.
To extend the metaphor, “sexiness is (perceived by the intuitive parts of your brain, absent intervention from moralizing or abstract-cognition parts, as) consent” is a good thing to pay attention to, so you can know what that part of you actually cares about, which gives you new information that isn’t simply from choosing a side on the “Sexiness is about evopsych and golden ratios and trading meat for sex!” versus “Sexiness is about communication and queer theory praxis and bucking stereotypes!” battle.
What I’m curious about is:
rebuilding it from intuition without interference from Deep Wisdom or philosophical cached thoughts.
What, then, do you rebuild your current conception of morality from? “Blowing up dozens of people, when I have only vague evidence that they’re mooks of the Forces of Evil, is a bad idea, even though it seems awesome” seems like a philosophical cached thought to me. Do you think it’s something else?
Counterfactual terrorism—“but those mooks may not be mooks!”—isn’t a good tool for discerning actual bad ideas.
If I respond to “Consent is sexy!” by saying “But some of my brain doesn’t think that!”, notice what those brainbits actually think, and then change those brainbits to find sexy what I think of as “consent”, I’m not in a very different situation from the person who’s cheering blindly for consent being sexy. I just believe my premise more on the ground level, which will blind me to ways in which my preconceived notions of consent might suck.
In other words, both my intuitive models of awesomeness and my explicit models of morality might be lame in many invisible ways. What then?
I use Meaningful Initial Caps to communicate tone, but recognize that it’s nonstandard. Sorry for any confusion.
I recognize the idiom (I’ve read most of the c2 wiki, and other places where it’s used); I’m just unsure how to parse it in this case. The closest match for “Perfectly Moral Good Individual” is a noun phrase emphasizing apparent nature rather than true nature.
Or did you mean “ignore those preferences in order to be a Perfectly Moral Good Individual who does not Like Evil Things” to be taken literally, in the sense that you have to lie about something to be moral? That seems odd. Lie to whom?
What, then, do you rebuild your current conception of morality from? “Blowing up dozens of people, when I have only vague evidence that they’re mooks of the Forces of Evil, is a bad idea, even though it seems awesome” seems like a philosophical cached thought to me. Do you think it’s something else?
Yes, it’s a cached thought, but one that has a solid justification that is easy to port. I have no trouble with bringing those over. The ones the “switch to awesome” procedure targets are cached thoughts like “I am confused about morality”, or the various bits of Deep Wisdom that act as the explosive in the philosophical landmine.
(Though of course many people in this thread managed to port their confusion and standard antiwisdom as well.)
The fact that you were forced to explicitly import “this is a bad idea because of X and Y” shows that it is generally working.
In other words, both my intuitive models of awesomeness and my explicit models of morality might be lame in many invisible ways. What then?
Not sure what you are getting at here.