B.A. in Philosophy from the University of São Paulo (USP), Brazil, and technical analyst at a Brazilian railway lab.
alexgieg
Knowing the truth doesn’t, by itself, provide human connection. In the Mormon church you had a community: people with whom you interacted and with whom you shared common ground, interests, and collective goals. When one breaks with such a community without having first established a new one, the result can be extreme loneliness.
The way to fix that is to find a new community. Many atheists and rationalists schedule periodic meetings to interact and talk in person, so depending on your need for connection that might suffice. If not, there are church-like organizations that require no profession of faith and welcome atheists, which is particularly effective if one was raised with church attendance and misses it. In the US, Unitarian Universalism is one of the oldest movements along those lines, with the form of Protestant Christianity minus the belief system, but there are others. This CBS article lists several: Inside the “secular churches” that fill a need for some nonreligious Americans.
If you’re not particularly attached to atheism itself, you also have the option of exploring personal religiosity and the communities that go along with it, which basically means constructing your own religion from your own experiences, which can be induced through means ranging from meditation and self-suggestion all the way to psychedelic trips. Doing that while remaining 99% a rationalist isn’t particularly difficult; the cost is embracing compartmentalization. But then, if that’s what it takes for one to find enough meaning in the world to want to continue on in it, I’d say it’s a price well worth paying. It’s what I myself do, and it hasn’t caused me any major problem, my take simply being that, if what I perceive is true, science will eventually catch up, and if it isn’t, as long as I’m not trying to assert it over the perfectly legitimate skepticism of others, then, well, shrugs.
So, my suggestion, in order, would be: meet other atheists and rationalists in real life with some regularity; if that isn’t enough, try a church-like atheist/agnostic/agnostic-friendly community; and if that still isn’t enough, do your own thing with others doing similarly.
Indeed. I imagine it’d have to happen in four steps:
- As you say, investigate each cognitive function independently. They won’t show the kind of independence psychometrics prefers, since the functions overlap, but it’d be a good start.
- If that proves robust, investigate the axis between the introverted and extraverted modes of each of the four basic functions. My hunch is these four axes would take the form of four bimodal distributions.
- Then, if that also proves robust, investigate the existence and distribution of stable stacks. There are 40,320 (that is, 8!) possible stacks if we consider all permutations of all eight functions. My hunch is we’d find a very long-tailed distribution, with a small number of common stacks covering roughly 98% of the population. Maybe those are the MBTI 16, maybe not.
- And then, finally, if the “stacks exist” hypothesis proves valid, study stacks over long periods of time to observe whether, and how, they change.
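The combinatorics in the third step can be checked directly. A quick sketch in Python (the pairing constraints follow the Myers-Briggs rules described later in this comment; variable names are mine):

```python
from itertools import permutations

# The eight function-attitude combinations: S/N/T/F, each in an
# introverted ("i") or extraverted ("e") mode.
funcs = [f + a for f in "SNTF" for a in "ie"]

# All orderings of the eight functions: 8! = 40,320 possible stacks.
all_stacks = len(list(permutations(funcs)))
print(all_stacks)  # 40320

# Under the Myers-Briggs constraints (the auxiliary comes from the
# other pair, perceiving vs. judging, and has the opposite attitude,
# while the rest of the stack is determined by the top two), only 16
# (dominant, auxiliary) combinations remain:
PAIR = {"S": "perceiving", "N": "perceiving", "T": "judging", "F": "judging"}
standard = [(d, x) for d in funcs for x in funcs
            if PAIR[x[0]] != PAIR[d[0]] and x[1] != d[1]]
print(len(standard))  # 16
```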
> It then scores the answers across 4 axes
I’ve read about the MBTI for a while. Not in extreme depth, but also not via the simplifications provided by corporate heads: deeply enough, at least, to understand the basics of the Jungian psychology on which the MBTI is based. So what I’ll say is likely to differ significantly from what you learned in this course.
So, the most important thing is, the (real) MBTI four letters do not represent extremes on four different axes. That they do is one such simplification.
The core of the Jungian hypothesis on personality is that there are eight distinct cognitive functions, that is, eight basic ways the mind processes and organizes external and internal information.
These eight cognitive functions form two opposing pairs, Sensing vs. Intuition and Thinking vs. Feeling, any of which may operate in either an Extraverted or an Introverted mode. Notice that it isn’t that Introversion and Extraversion form an axis; rather, say, “Introverted Thinking” and “Extraverted Thinking” are two very distinct modes of Thinking, to the point that they cannot be considered the same cognitive process at all.
Jung considered every person to have all eight cognitive functions operating in them, but at very different weights, with one dominant. In his system, I’d be someone who uses Introverted Thinking as his default cognitive function almost 24/7, varying only when needed under specific circumstances. So, for him, there were eight personality types, depending on which cognitive function is dominant in each person.
Myers and Briggs studied his works on the topic, and thought it was incomplete. They hypothesized that specifying a single cognitive function as dominant wasn’t enough to properly describe how the person functions. In their view, it was also necessary to take into account the cognitive function used secondarily. In my case, the secondary function I use the most is Extraverted Intuition.
Hence, for Myers and Briggs, my personality is defined as being primarily an Introverted Thinker, who uses Extraverted Intuition to fill the gaps where Introverted Thinking doesn’t cut it. And that’s it.
What are the four letters then?
They’re a needlessly convoluted way to say the exact same thing.
In the MBTI system, the two letters in the middle say what my two main cognitive functions are. Since I use Intuition and Thinking, they’re “NT”. But that doesn’t say which of these is my main function and which the secondary, nor which is Introverted and which Extraverted. That’s what the other two letters encode. The “I” at the beginning indicates that my main function, whether it’s the Thinking or the Intuition, is of the Introverted type. And the final letter indicates whether that “I” applies to the “N” or to the “T”. In my case the fourth letter is “P”, meaning my main function is the “T”, which is thus the one the “I” affects.
Yes, that’s completely nuts. It’d be much, much easier to use something like “IT/EN”.
And this brings another aspect of their system. They consider that the main and secondary cognitive functions always have opposite “-version”. Hence, by specifying that my main type, Thinking, is of the Introverted type, that automatically assumes the secondary one, Intuition, is Extraverted.
There are a few more details. Basically, the third and fourth most used cognitive functions come from the determination of the first two. In my case, my third and fourth most used cognitive functions would be, respectively, Introverted Sensing (opposite to the second), and Extraverted Feeling (opposite to the first). And the other four would fall behind at positions fifth to eighth. The full set is my so-called “cognitive stack”.
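For concreteness, this bookkeeping can be sketched in a few lines of Python. This is a toy illustration of the scheme as explained in this comment, not an official MBTI algorithm; the function names are mine, and conventions for the tertiary’s attitude vary between authors (I follow the “opposite to the second” rule used above):

```python
# Functions are (letter, attitude) pairs, e.g. ("T", "I") = Introverted Thinking.
JUDGING = {"T", "F"}
PERCEIVING = {"S", "N"}
OPPOSITE = {"T": "F", "F": "T", "S": "N", "N": "S"}
FLIP = {"I": "E", "E": "I"}

def stack(dominant, auxiliary):
    """Full four-function stack: the tertiary opposes the auxiliary,
    the inferior opposes the dominant, each with flipped attitude."""
    tertiary = (OPPOSITE[auxiliary[0]], FLIP[auxiliary[1]])
    inferior = (OPPOSITE[dominant[0]], FLIP[dominant[1]])
    return [dominant, auxiliary, tertiary, inferior]

def four_letters(dominant, auxiliary):
    """Encode a dominant/auxiliary pair as the four MBTI letters."""
    first = dominant[1]  # attitude of the dominant function
    # Middle letters: the perceiving function (S/N), then the judging one (T/F).
    perceiving = dominant[0] if dominant[0] in PERCEIVING else auxiliary[0]
    judging = dominant[0] if dominant[0] in JUDGING else auxiliary[0]
    # Fourth letter: J if the extraverted one of the pair is a judging
    # function, P if it is a perceiving one.
    extraverted = dominant[0] if dominant[1] == "E" else auxiliary[0]
    fourth = "J" if extraverted in JUDGING else "P"
    return first + perceiving + judging + fourth

# Introverted Thinking dominant with Extraverted Intuition auxiliary:
print(four_letters(("T", "I"), ("N", "E")))  # INTP
print(stack(("T", "I"), ("N", "E")))
# [('T', 'I'), ('N', 'E'), ('S', 'I'), ('F', 'E')]
```

Note how the four letters are recovered from just the top two functions: the letters are an encoding of the stack, not four independent axes.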
TL;DR then: the four letters are not axes; they’re a very, very confusing way of saying that I order the eight cognitive functions Jung identified in one specific sequence of priorities. By default, most of the time, I use this one, then the others at lower and lower priority, following that sequence. There are (presumably) 16 standard stacks, and maybe several non-standard pathological ones. And all the four MBTI letters convey is which of the 16 cognitive stacks applies in my case.
This, fundamentally, is the reason why the MBTI doesn’t correlate well, or at all, with the Big Five: the MBTI has no axes in a traditional psychometric sense. It’s an ordinal hierarchy of preferred cognitive processes, not a cardinal set of values or a standard distribution.
And the easiest way, by far, to identify one’s MBTI is to simply read the detailed descriptions of the eight cognitive functions. One of them almost always pops up as “yeah, that’s how I think most of the time”, with another popping up as “yeah, I also use this one a lot, not as much as that one, but still a lot”, the other six being stuff one clearly rarely uses.
Now, is any of this scientific? I don’t know. I’ve read many attempts at determining this, but all of them assume the four letters represent four axes that can then be psychometrically evaluated, which has absolutely nothing to do with what Jung was talking about. And I’m not aware of any psychological study on the validity, or lack thereof, of his hypothesis about the eight cognitive functions themselves (maybe there are some?), much less, assuming they’re valid, of Myers and Briggs’s specific assertion that they almost always come in 16 stacks (maybe they do, maybe they don’t, maybe they vary over time, etc.).
For my own anecdotal case, I find Introverted Thinking coupled with Extraverted Intuition, as described by Jung, covers a lot of how I function. Not everything by far, but a lot. So it’s useful. More than that, I cannot really say.
Hope this helps!
EDIT: Correction on my third and fourth functions and other minor clarifications.
I’d say this is the point at which one starts looking into current state-of-the-art psychology (and some non-scientific takes too) to begin understanding all the variability in human behavior and cognition, and the kinds of advantages and disadvantages each variant provides from different perspectives: the individual, the sociological, the evolutionary.
Much of that disappointment is solved by that. Some of it deepens. The overall effect is a net positive though.
Unfortunately, they aren’t rational. I developed this theme a little more in another reply, but to put it simply, in the US AGI is being pursued by insane individuals. No rational argument can stop someone who believes what they believe. And the other sides will try to protect themselves from them.
Admittedly, nuclear weapons are not a perfect analog for AI, for many reasons, but I think the analogy is a reasonable one.
We’ve had extreme luck when it comes to nuclear weapons. Not only were several close calls deescalated by particularly noble individuals doing the right thing, but also, back when the USSR had barely developed its own and the US alone had a whole stockpile of warheads, we had the good luck of US leadership being somewhat moral and refusing to turn nukes into a regular weapon, after which MAD forced everyone to stay that way, even when someone asked nicely whether they could bomb a third party. Were it not for that long sequence of good luck after good luck, we’d now be living in an annihilated world, or at the very least a post-apocalyptic one.
> With this in mind, I wanted to ask out of curiosity: what % risk do you think there needs to be for annihilation to occur?
I have no idea, really. All I can infer is that it’s unlikely any major power will stop trying to achieve AGI unless:
a) Either a massively severe accident due to a misaligned not-quite-AGI happens, one whose sheer, absolute horror puts the fear of God into our civilian and military leaders for a few generations;
b) Or a long sequence of reasonably severe accidents happens, each new one worse than the last, with AI companies repeatedly and consistently failing at fixing the underlying cause, this in turn making military leaders deeply wary of deploying advanced AI systems, and civilian leaders enacting restrictions on what AI is allowed to touch.
Absent either of those, I doubt the pursuit of AGI will stop no matter what X-risk analysts say. Or at least, I myself cannot imagine any argument that’d convince, say, the CPC to stop their research when those spearheading it on the other side are massively powerful nutjobs. And what argument could stop someone who believes as those nutjobs do? So neither will stop, which means AGI will happen. And then we’ll need to count on luck again, this time with:
i) Either GAI going FOOM as Yudkowsky believes, but for some reason continuing to like humans enough not to turn us into computronium;
ii) Or Hanson being right and FOOM not happening, followed by:
ii.1) Either things being slow enough to “merely” lead to a or b, above;
ii.2) Or things being so immensely slow we can actually fix them.
I have no opinion on whether FOOM is or isn’t likely. I’ve read the entire discussion and all I know is both sets of arguments sound reasonable to me.
> I’m assuming that—and please correct me if I’m misinterpreting here—“extinguish” here means something along the lines of, “remove the ability to compete effectively for resources (e.g. customers or other planets)” not “literally annihilate”.
I wish that were the case, but my reference point is a paranoid M.A.D. mentality coupled with a Total War scenario unbounded by moral constraints, that is, all sides thinking all the other sides are X-risks to them.
In practice things tend not to get that bad most of the time, but sometimes they do, and much of military preparation concerns mitigating these perceived X-risks. The idea is that if “our side” becomes so powerful it can in fact annihilate the others, and in consequence the others submit without resisting, then “our side” may be magnanimous towards them, conditional on their continued subservience and submission; but if they resist to the point of becoming an X-risk towards us, then removing them from the equation entirely is the safest defense against the X-risk they pose.
A global consensus on stopping AGI development due to its X-risk for all life requires a prior global consensus, by all sides, that none of the other sides is an X-risk to any of them. Once everyone agrees on that, them all agreeing together to deal with a global X-risk becomes feasible. Before that, it’s feasible only if they all see the global X-risk as more urgent and immediate than the many local-to-them X-risks.
Unfortunately, those in positions of power won’t listen. From their perspective it’s simply absurd to suggest that a system that currently causes, at most, a few dozen induced-suicide deaths per year may explode into the death of all life. They have no instinctive gut feeling for exponential growth, so it doesn’t exist for them. And even if they acknowledge there’s a risk, their practical reasoning moves along arms-race lines:
“If we stop and don’t develop AGI before our geopolitical enemies because we’re afraid of a tiny risk of an extinction, they will develop it regardless, then one of two things happen: either global extinction, or our extinction in our enemies’ hands. Which is why we must develop it first. If it goes well, we extinguish them before they have a chance to do it to us. If it goes bad, it’d have gone bad anyway in their or our hands, so that case doesn’t matter.”
Which is to say they won’t care until they see thousands or millions of people dying due to rogue AGIs. Then, and only then, they’d start thinking in terms of maybe starting talks about perchance organizing an international meeting to perhaps agree on potential safeguards that might start being implemented after the proper committees are organized and the adequate personnel selected to begin defining...
> But obviously, factory farm animals feel more pain than crickets. The question is just how much pain?
This paper is far from a complete answer, but it may help:
Sneddon, Lynne U., Robert W. Elwood, Shelley A. Adamo, and Matthew C. Leach. “Defining and Assessing Animal Pain.” Animal Behaviour 97 (2014): 201–12. https://doi.org/10.1016/j.anbehav.2014.09.007. Open access: https://www.wellbeingintlstudiesrepository.org/acwp_arte/69/.
This isn’t a dichotomy. We can farm animals while making their lives reasonably comfortable. Their moments of pain would be few up until they reach slaughter age, and slaughter itself can be made stress-free and painless.
Here in Brazil, for example, we have huge ranches where cattle roam freely. Cramming them all into a tiny area to maximize productivity at the cost of making their lives extremely uncomfortable, as in the US factory farm system, may happen here, but I’m not personally aware of it, that’s how unusual it is. The US could do it the same way, as it isn’t as if the country lacks territory where cattle could roam freely, but since this isn’t required by law, and factory farming is more profitable, it’s rare, with the end result that free-roaming meat is sold at a much higher premium than it should be.
Brazilian chickens, on the other hand, are typically crammed together the same as in the US, unless one opts to buy eggs from small family-owned farms, which mostly let them roam freely.
A few remarks that don’t add up to either agreement or disagreement with any point here:
Considering rivers conscious hasn’t been a difficulty for humans, as animism is a baseline impulse that develops even in the absence of theism, and it takes effort, at either the individual or the cultural level, for people to learn not to anthropomorphize the world. As such, I’d suggest that a thought experiment allowing for the possibility of a conscious river, even one composed of atomic moments of consciousness arising from strange flows through an extremely complex network of pipes, taps back into that underlying animistic impulse, and so will only seem weird to those who’ve previously managed to suppress that impulse through effort or nurture.
Conversely, as one can learn to suppress one’s animistic impulse towards the world, one can also suppress one’s animistic impulse towards oneself. Buddhism is the paradigmatic example of that effort. Most Buddhist schools of thought deny the reality of any kind of permanent self, asserting that the perception of an “I” emerges from atomistic moments of consciousness as an effect of their interactions, not as their cause or as a parallel process to them. From this perspective we may have a “non-conscious in itself” river whose pipe flows, interrupted or otherwise, cause the emergence of consciousness, exactly the same as, and in no way differently from, what human minds do.
But even those Buddhist schools that do admit a “something extra” at the root of the experience of consciousness consider it a form of matter that binds to ordinary matter and, operating as a single organic mixture, gives rise to those moments of consciousness. This might correspond, or be analogous on some level, to Searle’s symbols, at least going by the summarized view presented in this post. Now, irrespective of whether such symbols are reducible to ordinary matter, if they can “attach” to the human brain’s matter to form, er, “carbon-based neuro-symbolic aggregates”, nothing in principle (that I can imagine, at least) prevents them from attaching to any other substrate, such as water pipes, at which point we’d have “water-based pipe-symbolic” ones. Such an aggregate might develop a mind of its own, even a human-like mind, complete with a self-delusion that similarly takes that emergent self to be essential.
As such, it’d seem to me that, without a fully developed “physics of symbols”, such speculations may go either way and don’t really help solve the issue. A full treatment of the topic would need to expand on all such possibilities, and then analyse them from perspectives such as the ones above, before properly contrasting them.
> Where is all the furry AI porn you’d expect to be generated with PonyDiffusion, anyway?
From my experience, it’s on Telegram groups (maybe Discord ones too, but I don’t use it myself). There are furries who love to generate hundreds of images around a certain theme, typically on their own desktop computers where they have full control and can tweak parameters until they get what they wanted exactly right. They share the best ones, sometimes with the recipes. People comment, and quickly move on.
At the same time, when someone gets something with meaning attached, such as a drawing they commissioned from an artist they like, or that someone gifted them, it has more weight, both for themselves and for friends who share in their emotional attachment to it.
I guess the difference is similar to the one many (a few? most?) notice between a handcrafted and an industrialized good: even if the industrialized one is better by objective parameters, the handcrafted one is perceived as qualitatively distinct. So I can imagine a scenario in which there are automated, generative websites for quick consumption, especially video, as you mentioned, and Etsy-like made-by-a-real-person premium ones, with most of the associated social status geared towards the latter.
> A smart group of furry advertisers would look at this situation and see a commoditize-your-complement play: if you can break the censorship and everyone switches to the preferred equilibrium of AI art, that frees up a ton of money.
I don’t know about sex toys specifically, but something like that has been attempted with fursuits. There are cheap knockoff Chinese fursuit sellers on sites such as Alibaba, and there must be a market for those somewhere, otherwise they wouldn’t be advertised, but I’ve never seen anyone wearing one at either the big cons or the small local meetups I’ve attended, nor have I heard of someone who does. As with handcrafted art, it seems furries prefer handcrafted fursuits, made either by the wearers themselves or by artisan fursuit makers.
I suppose that might all change if the fandom grows to the point of becoming fully mainstream. If at some point there are tens to hundreds of millions of furries, most of whom carry furry-related fetishes (sexual or otherwise), real industries might form around us, to the point of breaking through the traditional handcraft focus. But I confess I have difficulty even visualizing such a scenario.
Hmm… maybe a good source for potential analogies would be the Renaissance Faire scene. I don’t know much about it, but it’s (as far as I can gather) more mainstream than the Furry Fandom. Do you know whether such commoditization happens there? It might be a good model for what’s likely to happen to the Furry Fandom as it further mainstreams.
This probably doesn’t generalize beyond very niche subcultures, but in the one I’m a member of, the Furry Fandom, art drawn by real artists is such a core aspect that, even though furries use generative AI for fun, we don’t value it. One reason for this is that, unlike more typical fandoms, in which members are fans of something specific made by a third party, in the Furry Fandom members are fans of each other.
Given that, and assuming the Furry Fandom continues to exist in the future, I expect members will continue commissioning art from each other or, at the very least, will continue wanting to be able to commission art from each other, and will use AI-generated art as a temporary stand-in while they save up to commission real pieces from the actual artists they admire.
I’d like to provide a qualitative counterpoint.
Aren’t these arguments valid for almost all welfare programs provided by a first-world country to anyone but the base of the social pyramid? Take retirement, for example. All the tax money that goes into paying retirees to do nothing would be much better spent helping victims of malaria etc. in third-world countries. If they weren’t responsible enough to save during their working years so as to live without working for the last 10 to 30 years of their lives, especially those from the lower middle class and above, or to have had 10 kids who would each sustain them in their late years with 10% of their income, that increases the burden on society, etc. And so similarly for other programs targeting the middle class. So why not redirect most or even all of this to those more in need?
A possible answer, covering the specific case you brought as well as the generalized version above, counterintuitive as it may be, is that the original intent of welfare seems to have been forgotten nowadays, which makes it worth bringing it back.
Welfare wasn’t originally implemented out of the charitable impulses of those in power. Rather, it was first implemented to increase worker productivity, as in the programs pioneered by Bismarck in the 19th century. After that, it went on being implemented to reduce the working class’s drive to become revolutionaries, something Marx noticed would happen in his Critique of the Gotha Program, which is why he opposed such programs. And in fact, wherever extensive welfare programs were instituted, early empirical observations showed they did reduce the revolutionary impulse.
Add to that the well-observed fact that mass revolutions over the last century and a half, left- and right-wing alike, have been strongly driven by dispossessed but well-educated, and thus entitled, young adults whose social and economic status was below their perceived self-worth, and we have the recipe for why providing welfare to those who traditionally form a revolutionary vanguard, precisely so they don’t become one, may be a reasonable long-term strategy, supposing we consider such movements, and what they result in, a net negative.
Hence the baseline question, as I see it, isn’t so much about the raw economics of the issue as about how likely a revolution in the US is, given the worsening economic conditions of its young middle class and the changing shape of the US age pyramid, and, based on a cost-benefit analysis, how much a revolution not happening in the US over the next generation or two is worth in monetary terms. Is a US revolution strictly impossible? If it’s possible, is its likelihood high enough that reducing it is worth $1 trillion?
The same goes for all welfare aimed at this socio-economic/age-bracket group.
EDIT: Typo and punctuation corrections, and minor clarifications.
> When this person goes to post the answer to the alignment problem to LessWrong, they will have low enough accumulated karma that the post will be poorly received.
I don’t think this is accurate; it depends more on how the post is presented.
In my experience, if someone posts something contrary to the general LW consensus, but argues carefully and in detail, addressing the likely conflicts and acknowledging where their position differs from the consensus, how, why, etc., in short, if they do the hard work of properly presenting it, it’s well received. It may earn an agreement downvote, which is natural and expected, but it also earns a karma upvote for the effort put into exposing the point, plus engagement from those who disagreed, explaining their points of disagreement.
Your point would be valid on most online forums, as people who aren’t as careful about arguments as LWers tend to conflate disliking with disagreeing, with the result that a downvote is a downvote is a downvote. Most LWers, in contrast, are well practiced at treating the two axes as orthogonal, and it shows.
The answer is threefold.
a) First, religious and spiritual perspectives are primarily a perceptual experience, not a set of beliefs. For those who have this perception, whose object is technically named “the numinous”, it is self-evident. The numinous stuff clearly “is there” for anyone to see/feel/notice/perceive/experience/etc., and they cannot quite grasp the concept of someone saying they notice nothing.
Here are two analogies of how this works.
For people with numinous perception, hearing “it’s pretty, but that’s all” is somewhat similar to someone with perfect vision hearing from a person born blind that they don’t see anything. The person with vision can only imagine “not seeing” as “seeing a black background”, similar to what they perceive when they close their eyes or stand in a perfectly dark room. But not seeing isn’t seeing black; it’s not seeing.
Or consider, for another analogy, that a dove with normally functioning magnetic-field sensing were able to talk, and asked you: “So, if you don’t feel North, which direction do you feel?” You’d reply “none”, and the dove would at most be able to imagine you feel something like up or down, because it cannot grasp what it’s like not to physically feel cardinal directions.
The opposite also applies. People with no numinous perception at all are baffled by those who have it describing a perception of something that quite evidently isn’t there. Their immediate take is that the person is self-deluded, or suffering from some perceptual issue, maybe even schizophrenic, if not outright lying. At their most charitable, they’ll attribute this perceptual error to a form of synesthesia.
Unsurprisingly, one is much more likely to be a theist or similar if one has numinous perception, and much more likely to be an atheist if one doesn’t, though there are exceptions. I don’t remember whether it was Carl Sagan or Isaac Asimov, but I recall one of them explaining in an interview that he did have this perception of a “something” there (I don’t think he referred to it by its technical name), and was thus constantly tempted towards becoming religious, but kept fighting that impulse, knowing it to be a mental trick.
b) Thus, once we establish that numinous perception is a thing, it becomes easy to understand what religions and spiritual beliefs are. Supernatural belief systems are attempts, some tentative and in broad strokes, others quite systematic, to account for these perceptions, starting from the premise that they’re perceptions of objective phenomena, not of merely subjective mental constructs.
Interestingly, in my experience talking with people who have this perception, what’s perceived as numinous varies from one to the other, which likely accounts for religious preferences when one has a choice.
For example, for some the nave of a Catholic cathedral is chock-full of the numinous, while a crystal-clear waterfall in a forest is just pretty, not numinous at all. Those with this kind of numinous perception are more likely to be Christian.
For others, it’s the reverse. Those are more likely to go for a religion more focused on nature: some form of native religiosity, unstructured spirituality, animism, or the like.
For yet others, both contexts feel numinous. These will be all in with syncretisms, complex ontological takes, and the like.
c) Finally, whether the perceived numinous thingies are objectively real or not depends on one’s philosophical assumptions.
If one sides with reductionism, then they’re clearly some kind of mental epiphenomenon, either advantageous or at least not disadvantageous for survival, which is why it keeps being expressed.
If one’s an antireductionist, one can say numinous thingies are quite real but made of pure qualia, without any measurable counterpart to make them numerically apprehensible, so either one has the sensory apparatus to perceive them or one doesn’t; external devices won’t help.
And the main issue here is that the choice between reductionism and antireductionism is axiomatic. One either prefers one, and goes with it, or prefers the other, and goes with it. There’s no extrinsic way to decide, only opposite arguments that tend to cancel out.
In conclusion:
To answer the question more directly, then: when someone says they believe in God, what they mean is that they perceive a certain numinous thingy, and that the most accurate way to describe that numinous thingy is with the word “God”, plus the entire set of concepts that comes with it in the belief system they’re attuned to.
If they abandoned this specific explanatory system, that wouldn’t affect their numinous perception qua perception, so they’d likely either go with another explanation they felt covered their perception even better or, more rarely, actively force themselves to resist accepting the reality of that perception. The perception itself would remain there, calling for their attention.
> I mean sure if you take self-reports as the absolute truth (...)
Absolute truth doesn’t exist; the range is always the open interval ]0;1[, since probabilities of exactly 0 or 1 would require infinitely strong evidence. What imprecisions in self-reporting do generate is higher variance, skew, bias, etc., and these can be addressed by better causal hypotheses. Those causal hypotheses, however, must be predictive and falsifiable.
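The point about 0 and 1 can be made concrete with a one-line Bayes update (a toy sketch; the numbers are illustrative):

```python
def bayes_update(prior, p_evidence_if_h, p_evidence_if_not_h):
    """Posterior P(H|E) from the prior P(H) and the two likelihoods."""
    numerator = prior * p_evidence_if_h
    return numerator / (numerator + (1 - prior) * p_evidence_if_not_h)

# A prior of exactly 0 or 1 never moves, however strong the evidence:
print(bayes_update(0.0, 0.99, 0.01))  # 0.0
print(bayes_update(1.0, 0.01, 0.99))  # 1.0

# Any prior strictly inside ]0;1[ responds to evidence:
print(round(bayes_update(0.5, 0.99, 0.01), 2))  # 0.99
```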
why go with the convoluted point about aro-ace trans women (...)
Because that’s central to the falsifiability requirement. Consider: if transgender individuals explicitly telling researchers they never experienced autogynephilic impulses, nor any sexual impulse or attraction at all, is dismissed as invalid by proponents of the autogynephilic hypothesis, with the suggestion that they actually did experience it but {ad hoc rationalization follows}, then what is the autogynephilic hypothesis’s falsifiability criterion? Is there one?
More studies != better integration of the information from those studies into a coherent explanation.
There are several moments in research.
The initial hypothesis is simple: there are identifiable physiological differences between human male and female brains, and transgender individuals’ brains show distinctive traits typical of the brains of the other sex, while cisgender individuals don’t.
This is testable, with clear falsifiability criteria, and provides a pathway for the development of a taxonomy of such differences, including typical values, typical variances, normal distributions for each sex, a full bimodal distribution covering both sexes, and the ability to position an individual’s brain somewhere along that bimodal distribution.
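As a sketch of what “positioning a brain along a bimodal distribution” could mean, take a single hypothetical trait modeled as one normal distribution per sex; every number below is invented purely for illustration:

```python
from statistics import NormalDist

# Hypothetical trait: one mode per sex, together forming a bimodal mixture.
dist_a = NormalDist(mu=100.0, sigma=10.0)
dist_b = NormalDist(mu=120.0, sigma=10.0)

measurement = 117.0  # an individual's (invented) measured value

# z-scores locate the individual relative to each mode.
z_a = (measurement - dist_a.mean) / dist_a.stdev
z_b = (measurement - dist_b.mean) / dist_b.stdev

print(f"z vs mode A: {z_a:+.2f}, z vs mode B: {z_b:+.2f}")
```

Here the individual sits 1.70 standard deviations above mode A’s mean but only 0.30 below mode B’s, i.e. far closer to the other mode; an actual taxonomy would of course involve many traits and empirically fitted distributions.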
Following that taxonomic mapping, if it pans out, there come questions of causality, such as what causes some individual brains to fall so distantly from the average for their birth sex. But that’s a further development way down the line. Right now what matters is the first stage is falsifiable and has been experiencing constant corroboration, not constant falsification.
So now it’s a matter of contrasting this theory’s falsifiability track record with the autogynephilic hypothesis’s falsifiability track record—supposing there’s one.
Feels like an example of bad discourse that you dismiss it on the basis of ace trans women without responding to what Blanchardians have to say about ace trans women.
Thanks for the link, but I’d say the text actually confirms my point rather than contradicting it. The numbers referred to:
“In this study, Blanchard (...) found that 75% of his asexual group answered yes. Similarly, Nuttbrock found that 67% of his asexual group had experienced transvestic arousal at some point in their lives. (...) 45.2% of the asexuals feel that it applies at least a little bit to them (...)”
Can all be reversed to show that, respectively, 25%, 33%, and 54.8% of aro-ace trans individuals answered in the negative, and the rebuttal of the universality of the hypothesis only needs these numbers to be non-zero. That they’re this high comes as an added bonus, so to speak.
I would enjoy if someone could lay it out in a more comprehensible manner.
This is being constantly done. Over the last 20+ years, as neuroimaging and autopsy techniques advance, and new studies are done using those more advanced techniques, we mostly get corroborations with more precision, not falsifications. There are occasional null results, so that isn’t strictly always the case, but those come as outliers, not forming a new, contrary body of evidence, and not significantly affecting the trend identified as meta-analyses keep being done.
I’m not aware of someone having done a formal Bayesian calculation on this, but my impression is it’d show the scale constantly sliding toward the physiological hypothesis, and away from the autogynephilic one, as time advances, with only small backslides along the way.
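A minimal sketch of what such a Bayesian bookkeeping could look like, with entirely made-up likelihoods: most studies corroborate (likelihood ratio above 1 for the physiological hypothesis), with the occasional null result pulling slightly the other way:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from the prior P(H) and the two likelihoods."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# Start agnostic, then process six corroborating studies (LR = 2)
# and one null-ish result (LR < 1). All numbers are illustrative only.
p_physio = 0.5
studies = [(0.8, 0.4)] * 6 + [(0.5, 0.6)]
for p_e_h, p_e_not_h in studies:
    p_physio = bayes_update(p_physio, p_e_h, p_e_not_h)

print(round(p_physio, 3))  # roughly 0.98 with these invented numbers
```

The point of the sketch is only the shape of the trajectory: steady corroborations compound, while an isolated null result produces a small backslide rather than a reversal.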
Yep, the idea that autogynephilia explains transgender identities can be shown to be false by pointing to a single piece of direct evidence: it isn’t difficult to find aro-ace trans people. That right there shows autogynephilia isn’t a universal explanation. It may apply to some cases, but transgender identities definitely go way beyond that.
Besides, but also mainly, we have evidence for physiological causes:
Frigerio, Alberto, Lucia Ballerini, and Maria Valdés Hernández. “Structural, Functional, and Metabolic Brain Differences as a Function of Gender Identity or Sexual Orientation: A Systematic Review of the Human Neuroimaging Literature.” Archives of Sexual Behavior 50, no. 8 (November 2021): 3329–52. https://doi.org/10.1007/s10508-021-02005-9.
And it takes lots of handwaving, or deliberately ignoring the data, to stick with the autogynephilic hypothesis as the most general explanation.
There’s a potential middle-way there.
I don’t know much about Mormonism, mind, but I watch and read a Biblical scholar, Dan McClellan, who’s skeptical of everything and then some. His YouTube channel, and other videos in which he appears, as well as his papers and books, are all in line with the academic consensus in Biblical scholarship, meaning he deconstructs every single Christian belief (and most Jewish ones too) to the point it’s easy to assume he’s a militant Atheist. But he’s actually a practicing Mormon, and his intense criticism extends to the books of the Mormon canon.
Contacting him might thus help. If someone like him can be an active member of the LDS church, even if in some kind of minority movement, you might find a way to similarly keep both things going.
That’s what I myself follow, mixed with some Taoism and Shinto. It’s a combination that works well for me.
If you go with Buddhism, it’ll help to familiarize yourself with the concept of Apatheism, which is distinct from Theism, Atheism and Agnosticism, and see if you’d be comfortable adopting it, since that’s the Buddhist take on things.
To summarize: Theisms care about deities and affirm their existence. Atheism cares about deities and affirms they don’t exist. Agnosticism cares about deities and wishes it knew which of the two is the case. As such, all three fall under the umbrella term Patheism: caring about the existence or inexistence of deities.
Apatheism is strict indifference towards deities. There are Apatheists who think deities exist, but if they don’t, nothing of import was lost. There are those who think they don’t exist, but if perchance they do, they still don’t matter much, or at all (though in that case it’s advisable to try and teach them Buddhism too, so they become better deities). And there are those who don’t know, and really don’t care. So Apatheism has equivalents to Theism, Atheism and Agnosticism, but their Apatheistic counterparts are so weakly distinct it barely registers.
Buddhism is the Apatheistic religion par excellence, so adopting Apatheism rather than Atheism or Agnosticism makes it much easier to understand its philosophy and to put it into practice.