I don’t feel qualified to comment on the relationship between LW and autism (I am, however, neurotypical). Rationality qua science of winning at life seems like it might have some tangential relevance, but only insofar as it could formalize certain aspects of living that those on the autistic spectrum might have trouble with in their native form; I don’t see a causal arrow pointing the other way. We could probably expect neurotypicals to have an easier time with social rationality relative to, say, financial, but that doesn’t seem like too much of a surprise.
On the other hand, if you’re looking for an anecdotal and wholly unscientific outside view, a friend of mine once compared reading the “Torture vs. Dust Specks” thread to watching a batch of homunculi try to sort out the finer points of human interaction. Which carries a certain irony given our relationship with machine ethics, now that I think about it.
a friend of mine once compared reading the “Torture vs. Dust Specks” thread to watching a batch of homunculi try to sort out the finer points of human interaction.
This one feels similar. Some excellent comments, but …
That’s an interesting observation. Similarly, I am currently working on formalizing some aspects of social interaction for AI, and there’s an obvious connection to explicitly developing social skills in those who don’t develop them by traditional means.