A lot of hypotheses about autism involve… wait for it… amygdala abnormality.
Autism gets way over-emphasized here and elsewhere as a catch-all diagnosis for mental oddity. Schizotypality and obsessive-compulsive spectrum conditions are just as common near the far right of the rationalist ability curve. (Both of those are also associated with lots of pertinent abnormalities of the insula, anterior cingulate cortex, dorsolateral prefrontal cortex, et cetera. However I’ve found that fMRI studies tend to be relatively meaningless and shouldn’t be taken too seriously; it’s not uncommon for them to contradict each other despite high claimed confidence.)
I’m someone who “talks (or reads) myself into” new moral positions pretty regularly and thus could possibly be considered an interesting case study. I got an fMRI done recently and can probably persuade the researchers to give me a summary of their subsequent analysis. My brain registered absolutely no visible change during the two hours of various tasks I did while in the fMRI (though you could see my eyes moving around, so it was clearly working); the guy sounded somewhat surprised at this but said that things would show up once the data got sent to the lab for analysis. I wonder if that’s common. (At the time I thought, “maybe that’s because I always feel like I’m being subjected to annoying trivial tests of my ability to jump through pointless hoops”, but besides sounding cool that’s probably not accurate.) Anyway, point is, I don’t yet know what they found.
(I’m not sure I’ll ever be able to substantiate the following claim except by some day citing people who agree with me, ’cuz it’s an awkward subject politically, but: I think the evidence clearly shows that strong aneurotypicality is necessary but not sufficient for being a strong rationalist. The more off-kilter your mind is the more likely you are to just be crazy, but the more likely you are to be a top tier rationalist, up to the point where the numbers get rarer than one per billion. There are only so many OCD-schizotypal IQ>160 folk. I didn’t state that at all clearly but you get the gist, maybe.)
Can you talk about some of the arguments that led you to taking new moral positions? Obviously I’m not interested in cases where new facts changed how you thought ethics should be applied, but cases where your ‘terminal values’ changed in response to something.
That’s difficult because I don’t really believe in ‘terminal values’, so everything looks like “new facts” that change how my “ethics” should be applied. (ETA: Like, falling in love with a new girl or a new piece of music can look like learning a new fact about the world. This perspective makes more sense after reading the rest of my comment.) Once you change your ‘terminal values’ enough they stop looking so terminal and you start to get a really profound respect for moral uncertainty and the epistemic nature of shouldness. My morality is largely directed at understanding itself. So you could say that one of my ‘terminal values’ is ‘thinking things through from first principles’, but once you’re that abstract and that meta it’s unclear what it means for it to change rather than, say, just a change in emphasis relative to something else like ‘going meta’ or ‘justification for values must be even better supported than justification for beliefs’ or ‘arbitrariness is bad’. So it’s not obvious at which level of abstraction I should answer your question.
Like, your beliefs get changed constantly, whereas methods only get changed during paradigm shifts. The thing is that once you move that pattern up a few levels of abstraction, where your simple belief update is equivalent to another person’s paradigm shift, it gets hard to communicate in a natural way. Take, for the ‘levels of organization’ flavor of levels of abstraction, the difference between “I love Jane more than any other woman and would trade the world for her” and “I love humanity more than any other memeplex instantiation and would trade the multiverse for it”. It is hard for those two values to communicate with each other in an intelligible way; if they entered into an economy with each other, they’d be making completely different kinds of deals. Communication is difficult, and the inferential distance here is way too big.
To be honest, I think that though efforts like this post are well-intentioned and thus should be promoted to the extent that they don’t give people an excuse to not notice confusion, Less Wrong really doesn’t have the necessary set of skills or knowledge to think about morality (ethics, meta-ethics) in a particularly insightful manner. Unfortunately I don’t think this is ever going to change. But maybe five years’ worth of posts like this, at many levels of abstraction and drawing on many different sciences and perspectives, would lead somewhere? But people won’t even do that. Ahem.
Like, there’s a point at which object-level uncertainty looks like “should I act as if I am being judged by agents with imperfect knowledge of the context of my decisions, or as if I am being judged by an omniscient agent, or as if I need to appease both simultaneously, or …”; you can go meta here in the abstract to answer this object-level moral problem, but one of my many points is that at this point it looks nothing like ‘is killing good or bad?’ or ‘should I choose to have the Nazis kill my son, or my daughter (given that they’ve forced this choice upon me)?’.
‘should I choose to have the Nazis kill my son, or my daughter (given that they’ve forced this choice upon me)?’
I remember that when I was like 11 years old I used to lie awake at night obsessing about variations on Sophie’s choice problems. Those memories are significantly more vivid than my memories of living off ramen and potatoes with no electricity for a few months at around the same age. (I remember thinking that by far the worst part of this was the cold showers, though I still feel negative affect towards ramen (and eggs, which were also cheap).) I feel like that says something about my psychology.
You know, it isn’t actually any more descriptive to write out what ACC and DLPFC stand for, since if people know anything about them they already know their acronyms, but writing them out signals that I know that most people don’t know what ACC and DLPFC stand for and am penitent that I’m not bothering to link to their respective Wikipedia articles. I hate having to constantly jump through such stupid signalling hoops.
I can tell from “anterior cingulate cortex” that you are talking about a part of the brain, even though I haven’t heard of that part before. (I may have been able to tell from context that much about “ACC”, but it would have been more work, and I would have been less confident.)
And compare the Google search results for “ACC” and “anterior cingulate cortex”. It is nice to get a more relevant first result than “Atlantic Coast Conference Official Athletic Site”.
It’s rare that people bother to learn new things on their own, whereas it’s common for them to punish people who make it trivially more difficult for them to counterfactually do so, even though they wouldn’t even have been primed to want to do so if that person hadn’t brought up the subject. That’s the thing I’m complaining about. (Yes, yes: marginal costs, opportunity costs; insert the usual disclaimers.)
This might make you feel better: There is a part of every reader that cares about the subjective experience of reading. If you propitiate that part by writing things that are a pleasure to read, they’ll be more likely to read what you say.
FWIW, I have a passing familiarity from long ago with both terms, and none with the acronyms. I would have been mystified if you’d written ACC and probably would not have been able to figure out what you’re talking about, given some quick googling. Though DLPFC could probably have gotten me there after googling that.
It’s certainly easier to look up terms than abbreviations, and even more so years later. People using abbreviations that have since fallen out of use is one of my pet peeves when reading older papers.
Right, but “a passing familiarity from long ago” != “knowing anything about them” in my mind. (Obviously I should have used a less hyperbolic phrase.) OTOH I wasn’t aware of the phenomenon where abbreviations often fall out of use. I think the DLPFC was only carved out of conceptspace in the last decade, both the idea and its abbreviation, which does indicate that the inverse problem might be common for quickly advancing fields like neuroscience. (ETA: So, I was wrong to see this as an example of the thing I was complaining about. (I don’t think I was wrong to complain about that thing but only in a deontological sense; in a consequentialist sense I was wrong there too.))
Very helpful advice in only two sentences. Appeals to aesthetics are my favorite. Thank you.