The Opposite Of Autism

It’s probably not presuming too much to guess that many around here have personal experience with the autism spectrum, if not in relation to themselves, then with close family. I say this because the kinds of subjects discussed around here are exactly the type that would appeal to those of an autistic persuasion, e.g. technical systems, logic, and (arguably) utilitarian philosophy. Many here probably have backgrounds in STEM, and those fields tend to have a significant over-representation of people on the spectrum.

An issue that often comes up in software design (a field with high ASD representation) is programmers failing to properly model the wants and needs of non-technical end users. I bring this up because I see AI alignment as a scaled-up version of this problem. The kind of people who have a strong interest in AI/machine learning will likely have a greatly disproportionate impact on the future of human civilization. This might be a problem: not only is this subset of humans highly atypical in cognitive style, but the very mental architecture that underlies their interest in technical systems limits their ability to model the minds of typical humans!

The hardest humans for ASD types to model would be those whose minds are the diametric opposite of their own. Call this condition anti-autism. It would consist of... well, I’m not exactly sure; it’s hard for me to imagine the mental lives of these people. I’ve heard the phrase “people vs. things” thrown around, implying that ASD types are drawn to inanimate objects, and that humans on the opposite end of this axis would be drawn to people. I’m not so sure: plenty of people with ASD have an obsessive interest in categorizing humans and other living things.

While autism itself has been studied extensively, there’s a curious lack of interest in what a condition with the exact opposite traits would look like, or even in whether it exists. Simon Baron-Cohen, one of the most famous autism researchers and creator of the systemizer–empathizer scale [0], argued that the question doesn’t come up because such a condition isn’t actually a problem. Basically, a human who is system-blind but extremely skilled at reading other humans (a hyper-empathizer, in his terms) can get by perfectly well [1]:

Scientists have never got up close to these individuals. It is a bit like positing the existence of a new animal on theoretical grounds, and then setting out to discover if it is really found in nature.

[W]hat would such people look like? Their empathizing ability would be average or significantly better than that of other people in the general population, but their systemizing would be impaired. So these would be people who have difficulty understanding math or physics or machines or chemistry, as systems. But they could be extremely accurate at tuning in to others’ feelings and thoughts.

Would such a profile carry any necessary disability? Hyperempathizing could be a great asset, and poor systemizing may not be too crippling.

Fortunately, in our society there is considerable tolerance for such individuals. For example, if you were a child who was systemblind, your teachers might simply allow you to drop mathematics and science at the earliest possible stage, and encourage you to pursue your stronger subjects.

If you were a systemblind adult and your car didn’t work, you could just call the mechanic (who is likely to be at least a Type S). If your computer needs putting together, and you can’t work out which lead goes into which socket, there are phone numbers that you can ring for technical support. And in evolutionary terms, there were likely equivalent people that a systemblind person could turn to for help when that person’s home was destroyed in strong winds, or when their spear broke.

Baron-Cohen dismisses the suggestion that paranoia is anti-autism, on the grounds that paranoid people don’t accurately infer the mental states of other humans, but rather create fictional accounts of them:

If someone is over-attributing intentions, or has become preoccupied by their own emotions, then by definition they are not exhibiting hyperempathy. Hyperempathy is the ability to ascertain the mental states of others to an unusually accurate and sensitive degree, and it can only occur if one is appropriately tuned in to the other person’s feelings. A paranoid person, or someone who is easily inflamed into aggression by suspecting that others are hostile, has a problem. But their problem is not hyperempathy.

So again, it’s not simple to guess what people with “the opposite of autism” are like, as they’re generally not available for clinical study.

I think investigating this would be of interest to people working in AI alignment whose ultimate goal is improving the condition of humanity in general. Understanding the needs and wants of the subset of humans most unlike themselves would likely help in modeling the desires of the typical person.

As an aside, Baron-Cohen’s systemizers-vs.-empathizers framework reminded me a lot of Asimov’s Foundation books [2]: the First Foundation, with its technicians and natural scientists, and the Second Foundation, with its psychologists, ultimately needed each other to survive.


[0] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1693117/

[1] Baron-Cohen, Simon. The Essential Difference (2003).

[2] https://en.wikipedia.org/wiki/Foundation_series