Recent Ph.D. in physics from MIT, Complex Systems enthusiast, AI researcher, digital nomad. http://pchvykov.com
So yes, I agree that intolerance can also be contagious—and it’s sort of a quantitative question of which one outweighs the other. I don’t personally believe in “evil” (as you sort of hint there; I believe that if we are sufficiently eager to understand, we can always find common humanity with anyone), but all kinds of neurodivergences, such as a biological lack of empathy, do exist, and while we need not stigmatize them, they may be socially disruptive (like torching a city). Again, the question of whether our absolutely tolerant society can be stable in the face of psychopaths torching cities once in a while is, I think, a quantitative one.
But what I’m excited about here is that if those quantities are sufficient (tolerance is sufficiently contagious, psychopaths are sufficiently rare, etc.), then we could have an absolutely tolerant society—even in that pacifist way you don’t quite like. That possibility in itself I find exciting, and it is something I think Popper did not see.
While these are relevant elaborations on the paradox of tolerance, I’d also be curious to hear your opinion on the proposal I’m making here—could tolerance be contagious, without any intentional action to make it so (violent or otherwise)? If so, could that make the existence of an absolutely tolerant society conceivable?
I think your perspective also relies on an implicit assumption which may be flawed. I’m not quite sure what it is exactly—but it’s something around assuming that agents are primarily goal-directed entities. That is the game-theoretic framing—and in that case, you may be quite right.
But here I’m trying to point out precisely that people have qualities beyond the assumptions of a game-theoretic setup. Most of the time, we don’t actually know what our goals are or where they came from. So I guess here I’m thinking of people more as dynamical systems.
Against the paradox of tolerance
For what it’s worth, let me just reply to your specific concern here: I think the value of anthropomorphization I tried to explain is somehow independent of whether we expect God to intervene or not. If you are saying that this “expectation” may be an undesirable side-effect, then that may be so for some people, but that does not directly contradict my argument. What do you think?
And the word was “God”
just updated the post to add this clarification about “too perfect”—thanks for your question!
I like the idea of agency being some sweet spot between being too simple and too complex, yes. Though I’m not sure I agree that if we can fully understand the algorithm, then we won’t view it as an agent. I think the algorithm for this point particle is simple enough for us to fully understand, but due to the stochastic nature of the optimization algorithm, we can never fully predict it. So I guess I’d say agency isn’t a sweet spot in the amount of computation needed, but rather in the amount of stochasticity perhaps?
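A toy sketch of that last point (my own illustration, not from any experiment in the post): a particle following a fully transparent noisy-descent rule is simple enough to understand completely, yet no individual trajectory can be predicted—only its statistics.

```python
import random

def noisy_descent(x0, steps=100, lr=0.1, noise=0.5, seed=None):
    """Point particle doing noisy gradient descent on f(x) = x^2.
    The rule is fully transparent, yet each run's path differs."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(steps):
        grad = 2 * x                              # gradient of x^2
        x = x - lr * grad + noise * rng.gauss(0, 1)  # known rule + noise
        path.append(x)
    return path

# Same simple algorithm, same start, two different trajectories:
a = noisy_descent(5.0, seed=1)
b = noisy_descent(5.0, seed=2)
```

Both runs drift toward the minimum, but their step-by-step paths diverge—full knowledge of the algorithm doesn’t yield full predictability.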
As for other examples of “doing something so well we get a strange feeling,” the chess example wouldn’t be my go-to, since the action space there is somehow “small”—discrete and finite. I’m thinking more of the difference between a human ballet dancer and an ideal robotic ballet dancer—that slight imperfection makes the human somehow relatable for us. E.g., in CGI you have to make your animated characters make some unnecessary movements, each step must be different from any other, etc. We often admire hand-crafted art more than perfect machine-generated decorations for the same sort of minute asymmetry that makes it relatable, and thus admirable. In vocal recording, you often record the song twice for the L and R channels rather than just copying one take (see ‘double tracking’)—the slight differences make the sound “bigger” and “more alive.” Etc., etc.
Does this make sense?
ah, yes! good point—so something like the presence of “unseen causes”?
The other hypothesis the lab I worked with looked into was the presence of some ‘internally generated forces’ - sort of like an ‘unmoved mover’ - which feels similar to what you’re suggesting?
In some way, this feels not so much more general than “mistakes” as a different route. Namely, I can imagine some internal forces guiding a particle perfectly through a maze in a way that would still look like an automaton.
Just posted it—the post came out fairly basic, but I’m still curious about your opinion: https://www.lesswrong.com/posts/aMrhJbvEbXiX2zjJg/mistakes-as-agency
Mistakes as agency
yeah, I thought so too—but I only had very preliminary results, not enough for a publication. Perhaps I could write up a post based on what I had, though.
thanks for the support! And yes, definitely closely related to questions around agency. With agency, I feel there are two parallel, related questions: 1) can we give a mathematical definition of agency (here I think of info-theoretic measures, abilities to compute, predict, etc.), and 2) can we explain why we humans view some things as more agent-like than others (a cognitive science question that I worked on a bit some years ago with these guys: http://web.mit.edu/cocosci/archive/Papers/secret-agent-05.pdf ). I never got around to publishing my results—but I was discovering something very much like what you write. I was testing the hypothesis that if a thing seems to “plan” further ahead, we view it as an agent—but instead found that the number of mistakes it makes in the planning matters more.
A physicist’s approach to Origins of Life
I really appreciate your care in having a supportive tone here—it is a bit heartbreaking to read some of the more directly critical comments.
great point about the non-consensual nature of Ea’s actions—it does create a dark undertone to the story, and needs either correcting or expanding (perhaps framing it as the source of the “shadow of sexuality”—so we might also remember the risks).
the heteronormative line I did notice, and I think it could be generalized straightforwardly—this was just the simplest place to start. I love your suggestion of “‘sex’ as acting on a body specifically to produce pleasure in that body.”
And yes, there are definitely many, many aspects of sex that can then be addressed within this lore—like rape, consent, STDs, procreation, sublimation, psychological impacts, gender, family, etc. Taking the Freudian approach, we could really frame all aspects of human life within this context—could be a fun exercise.
I guess the key hypothesis I’m suggesting here is that explaining the many varied aspects of sexuality in terms of a deity could help to clarify all its complexity—just as the pantheon of gods helped early pagan cultures make sense of the world and make some successful predictions / inventions. It could be nicer to have a science-like explanation, but people would have a harder time keeping that straight (and I believe we don’t yet have enough consensus in psychology as a science anyway).
yeah I don’t know how cultural myths like Santa form or where they start—now they are grounded in rituals, but I haven’t looked at how they were popularized in the first place.
hmm, with all this feedback I’m wondering if my framing of this story as “sex-ed to smooth out the impact of puberty” is not quite fitting. I definitely have a sense that this story can play some beneficial role in promoting a more healthy sexuality in our society—though perhaps my framing about puberty is misplaced?
huh, thanks for the engagement, guys—I definitely didn’t anticipate that this would be so triggering…
I’m hearing two separate points here: 1) magic creatures and fairy tales do more to confuse than to clarify; 2) let’s be careful not to scare kids about sex nor make it a bigger deal than it already is. I think we could have a rich discourse about each of these, and I see many arguments to be made on both sides—with neither being a clearly resolved issue, imho. Just as an example, here are some possible counters I see to these:
1) What role do fairy tales and lore play in our education and in building understanding? For one, “all models are wrong, some are useful”—so I don’t think that whether Santa exists or not is really the interesting question; I’d rather ask in what ways he is helpful or confusing. Insofar as storytelling is a good vehicle for humans to convey values and information, it serves its purpose. As for lying to kids—I’d say we can keep Santa without claiming things about him that aren’t true. I think another important purpose of such lore is ritual—of which Christmas is an example. Ritual practices have a clear role and impact on people that can be cognitively very beneficial if not abused.
2) Yes, sex may already be “too big of a deal,” but not in ways that are constructive or helpful. The hormonal impact of sex on our minds is hard to overstate—it really is a huge deal, for some people more than others. Since this is a question of qualia, I can reliably speak only from personal experience—and in retrospect I see that it ran my life for a number of years, the more so the more I repressed it. Learning to sublimate that energy, and to really enjoy it in areas of life outside of sex, has been the single greatest shift I have experienced in persistent personal happiness, energy, and productivity. And this is what I’m referring to in this story—to me, sex and its broader impact is the most magical thing I have experienced in life, so if anything is worth calling magical, I’d say this is it.
Of course, both of these points present a biased side of the full story, and I wouldn’t personally 100% agree with them, as reality is always more subtle and balanced than such arguments. If you like, check out some other, perhaps more scientific, discussions I wrote around related topics:
a rationalist perspective on “magic”: https://www.lesswrong.com/posts/uRiiNMCDdNnGo3Lqa/magic-tricks-and-high-dimensional-configuration-spaces
Is Santa Real—as an effective theory: https://www.pchvykov.com/post/is-santa-real
Sex Fairy Lore
oh yeah, I’ve seen that one before—really awesome stuff! I guess you could say the goalkeeper discovers a “mental” dimension whereby it can beat the attacker more easily than by using the “physical” dimensions of directly blocking.
This all also feels related to Goodhart’s law—though subtly different...
Can you please commercialize this gem? I (and probably many others) would totally buy it—but making it myself is a bit of a hurdle...