Recent Ph.D. in physics from MIT, Complex Systems enthusiast, AI researcher, digital nomad. http://pchvykov.com
yeah, I can try to clarify some of my assumptions, which probably won’t be fully satisfactory to you, but a bit:
I’m trying to envision here a best-possible scenario with AI, where we really get everything right in the AI design and application (so yes, utopian)
I’m taking the question “is AI conscious?” to be fundamentally ill-posed, as we don’t have a good definition of consciousness—hence I’m imagining AI as merely correlation-seeking statistical models. With this, we also remove any notion of AI having “interests at heart” or doing anything “deliberately”
and so yes, I’m suggesting that humans may be having too much fun to reproduce with other humans, and won’t feel much need to. It’s more a matter of a certain carelessness than of deliberate suicide.
Thanks for your interest—really nice to hear! here is a link to the videos (and supplement): https://science.sciencemag.org/content/suppl/2020/12/29/371.6524.90.DC1
yeah, that could be a cleaner line of argument, I agree—though I think I’d need to rewrite the whole thing.
For testable predictions… I could at least see models of extreme cases—purely physical or purely memetic selection—and perhaps being able to find real-world examples where one, the other, or neither is a good description. That could be fun
Interesting point—that adds a whole other layer of complexity to the argument, which feels a bit daunting to me to even start dissecting.
Still, could we say that in the standard formulation of Darwinian selection, where only the “fittest” survives, the victim is really considered to be dead and gone? I think that at least in the model of Darwinism this is the case. So my goal in this post is to push back on this model. You give a slightly different angle to also push back on this model. I.e., whether intentional or accidental, when one culture defeats another, it takes on attributes of the victim—and therefore some aspects of the victim live on, modifying the dynamics of “natural selection.”
As to whether it’s a good thing—well, the whole post starts on moral relativism, so I don’t want to suddenly bring in moral judgements at this point. It’s an interesting question, and I think you could make the argument either way.
Thanks for your comment!
From this and other comments, I get the feeling I didn’t make my goal clear: I’m trying to see if there is any objective way to define progress / values (starting from assuming moral relativism). I’m not trying to make any claim as to what these values should be. The Darwinian argument is the only one I’ve encountered that made sense to me—and so here I’m pushing back on it a bit—but maybe there are other good ways to objectively define values?
Imho, we tend to implicitly ground many of our values in this Darwinian perspective—hence I think it’s an important topic.
I like what you point out about the distinction between prescriptive vs descriptive values here. Within moral relativism, I guess there is nothing to say about prescriptive values at all. So yes, Darwinism can only comment on descriptive values.
However, I don’t think this is quite the same as the fallacies you mention. “Might makes right” (Darwinian) is not the same as “natural makes right”—natural is a series of historical accidents, while survival of the fittest is a theoretical construct (with the caveat that at the scale of nations, number of conflicts is small, so historical accidents could become important in determining “fittest”). Similarly, “fittest” as determined by who survives seems like an objective fact, rather than a mind projection (with the caveat that an “individual” may be a mind projection—but I think that’s a bit deeper).
Yes! and here we are trying to study the spectral properties of said noise to try to reverse-engineer your radio, as well as understand the properties of electromagnetic field itself. So perhaps that’s one way to look at the practice :)
cool—and I appreciate that you think my posts are promising! I’m never sure if my posts have any meaningful ‘delta’ - seems like everything’s been said before.
But this community is really fun to post for, with meaningful engagement and discussion =)
wow, some Bayesian updating there—impressive! :)
I’m not sure why this was crossed out—seems quite civil to me… And I appreciate your thoughts on this!
I do think we agree at the big-picture level, but have some mismatch in details and language. In particular, as I understand J. Pearl’s counter-factual analysis, you’re supposed to compare this one perturbation against the average over the ensemble of all possible other interventions. So in this sense, it’s not about “holding everything else fixed,” but rather about “what are all the possible other things that could have happened.”
Cool—thanks for your feedback! I agree that I could be more rigorous with my terminology. Nonetheless, I do think I have a rigorous argument underneath all this—even if it didn’t come across. Let me try to clarify:
I did not mean to refer to human intentionality anywhere here. I was specifically trying to argue that the “chaos-theory definition of causality” you give, while great in idealized deterministic systems, is inadequate in the complex, messy “real world.” Instead, the rigorous definition I prefer is the counterfactual, information-theoretic one developed by Judea Pearl, which I tried to outline here in layman’s terms. This definition is entirely ill-posed in a deterministic chaotic system, but works as soon as we have any stochasticity (from whatever source). Does this address your point at all, or am I off-base?
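To make that “ensemble of interventions” reading concrete, here is a minimal toy sketch (the structural model, numbers, and intervention set are all invented for illustration; this is not Pearl’s do-calculus in full):

```python
import random

random.seed(0)

def simulate(x, n=10000):
    """Toy structural model Y = x + noise: mean of Y under the intervention do(X=x)."""
    return sum(x + random.gauss(0, 1) for _ in range(n)) / n

# The effect of do(X=2) is judged not against "everything else held fixed,"
# but against the average over an ensemble of alternative interventions.
alternatives = [0, 1, 2, 3]
baseline = sum(simulate(xp) for xp in alternatives) / len(alternatives)
effect = simulate(2) - baseline  # roughly 2 - 1.5 = 0.5
```

The stochastic noise term is what makes the comparison well-posed: with it, each intervention defines a distribution over outcomes rather than a single brittle trajectory.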
Thanks for expanding on this stuff—really nice discussion!
Yeah that stock-market analogy is quite tantalizing—and I like the breadth that it could apply to.
For your discussion of “unnatural”—sure, I agree with the sentiment—but it’s the question of how to formalize all this so that it produces a testable, falsifiable theory that I’m unclear on. Poetically it’s all great—and I enjoy reading philosophical treatises on this—but they always leave me wanting, as I don’t get something to hold onto at the end, something I can directly and tangibly apply to decision-making.
For your last paragraph, yeah that emphasis on “relational” perspective of reality is what I’m trying to build up and formalize in this post. And yes, it’s a bit hypocritical to say that “ultimately reality is relational” ;P
Thanks for sharing your thoughts—cool ideas!
Yes, I’ve actually thought that human interactions may be well modeled as a stock-market… never actually looked into whether this has been done though. And yes, maybe such model could be framed using this network-type setup I described… could be interesting—what if different cliques have different ‘stock’ valuation?
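Just to make the clique idea concrete, here is a purely hypothetical toy sketch (all dynamics and numbers invented): each clique assigns its own ‘stock’ price to every agent, and an interaction acts like a small trade nudging the acting clique’s valuation of the partner:

```python
import random
random.seed(2)

# Invented toy model: two cliques, each holding its own per-agent valuation.
n_agents = 6
cliques = {0: [0, 1, 2], 1: [3, 4, 5]}
value = {c: [1.0] * n_agents for c in cliques}  # clique -> price of each agent

def interact(clique, agent, outcome):
    """A positive/negative interaction moves that clique's price of `agent`
    multiplicatively, like a small trade moving a stock."""
    value[clique][agent] *= 1.05 if outcome > 0 else 0.95

for _ in range(100):
    interact(random.choice([0, 1]), random.randrange(n_agents),
             random.uniform(-1, 1))

# The two cliques end up holding different valuations of the same agents.
spread = max(abs(value[0][a] - value[1][a]) for a in range(n_agents))
```

Whether something like this has explanatory power is exactly the open question—the sketch only shows that clique-dependent valuations are easy to set up.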
“...the more unnatural said law is.”—the word ‘natural’ is a bit of a can of worms… I guess your statement could be viewed as an interesting definition of ‘natural’? E.g., in nonequilibrium stat mech you can quantify a lower bound on the energy expenditure needed to keep something away from the equilibrium distribution. E.g., I’ve thought of applying this to quantify the minimum welfare spending needed to keep social inequality below some value. But here maybe you’re thinking more generally? I just think ‘natural’ or ‘real self’ are really slippery notions to define. E.g., is all life inherently unnatural, since it requires energy expenditure to exist?
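As a toy version of that kind of bound (all numbers hypothetical, and the exact prefactor depends on the dynamics), the relative entropy between the maintained distribution and the equilibrium one sets the scale, in units of kT, of the minimum dissipation needed to hold the system away from equilibrium:

```python
import math

def kl_divergence(p, q):
    """KL(p||q) in nats: sets the scale of the minimum free-energy cost
    (in units of kT) of holding a system at p instead of its equilibrium q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p_eq = [0.7, 0.2, 0.1]        # hypothetical equilibrium (e.g. wealth) distribution
p_target = [0.4, 0.35, 0.25]  # more-equal target held by "welfare spending"
cost_scale = kl_divergence(p_target, p_eq)  # about 0.2 nats
```

On this reading, ‘unnaturalness’ of a maintained state could be quantified by how far (in KL terms) it sits from where the system would relax on its own.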
“As if the brain experiences a linear combination of conflicting things.”—that’s precisely the sort of observations that Quantum Cognition models using quantum-like state-vectors. And precisely the sort of thing this framework I’m describing could help to explain perhaps.
“It feels sort of like a set trying to put itself inside itself?”—nice one! And there was a time when ancient Greek philosophers conclusively ‘proved’ to themselves the impossibility of ever fully understanding what matter is made of, and figured it’s better to spend time on moral philosophy. Now, the former is basically solved, and the latter is still very much open. So I don’t buy into no-go theorems much...
Thanks for your comments! I’m having a bit of trouble clearly seeing your core points here—so forgive me if I misinterpret, or address something that wasn’t core to your argument.
To the first part, I feel like we need to clearly separate QM itself (Copenhagen), different Quantum Foundation theories, and Quantum Darwinism specifically. What I was saying is specifically about how Quantum Darwinism views things (in my understanding) - and since interpretations of QM are trying to be more fundamental than QM itself (since QM should be derived from them), we can’t use QM arguments here. So QD says that (alive, dead) is the complete list because of consensus (i.e., in this view, there isn’t anything more fundamental than consensus).
I don’t think I agree with (or don’t understand what you mean by) “including the superposition of dead and alive leads to actual physical consequences”—the bomb-testing result is a consequence of standard QM, so it doesn’t prove anything “new.”
To the second part, I implicitly meant that reproducibility could be either deterministic (reproducibility of a specific outcome) or statistical (reproducibility of the probability of an outcome over many realizations) - I don’t really see those two as fundamentally different. In either case, we think of objective truth (whether probabilistic or deterministic) as something derived from reproducibility—so, for example, excluding Knightian uncertainty.
I like the idea of agency being some sweet spot between being too simple and too complex, yes. Though I’m not sure I agree that if we can fully understand the algorithm, then we won’t view it as an agent. I think the algorithm for this point particle is simple enough for us to fully understand, but due to the stochastic nature of the optimization algorithm, we can never fully predict it. So I guess I’d say agency isn’t a sweet spot in the amount of computation needed, but rather in the amount of stochasticity perhaps?
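That point-particle picture can be sketched in a few lines (a toy example I’m inventing here: noisy gradient descent on f(x) = x², with made-up step sizes): the algorithm is fully transparent, yet no individual trajectory is predictable.

```python
import random
random.seed(1)

def noisy_descent(x0=5.0, lr=0.1, noise=0.5, steps=200):
    """Point particle minimizing f(x) = x^2 by noisy gradient steps.
    We understand the algorithm completely, but the stochastic term
    makes each run's exact path unpredictable."""
    x = x0
    for _ in range(steps):
        x -= lr * 2 * x + noise * random.gauss(0, 1)
    return x

# Two runs of the same fully-understood algorithm differ in detail,
# even though both hover near the minimum.
a, b = noisy_descent(), noisy_descent()
```

So on the hypothesis above, any residual sense of “agency” here would come from the stochasticity, not from any computational depth we fail to grasp.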
As for other examples of “doing something so well we get a strange feeling,” the chess example wouldn’t be my go-to, since the action space there is somehow “small” since it is discrete and finite. I’m more thinking of the difference between a human ballet dancer, and an ideal robotic ballet dancer—that slight imperfection makes the human somehow relatable for us. E.g., in CGI you have to make your animated characters make some unnecessary movements, each step must be different than any other, etc. We often admire hand-crafted art more than perfect machine-generated decorations for the same sort of minute asymmetry that makes it relatable, and thus admirable. In voice recording, you often record the song twice for the L and R channels, rather than just copying (see ‘double tracking’) - the slight differences make the sound “bigger” and “more alive.” Etc, etc.
Does this make sense?
thanks for the support! And yes, definitely closely related to questions around agency. With agency, I feel there are two parallel, related questions: 1) can we give a mathematical definition of agency (and here I think of info-theoretic measures, abilities to compute, predict, etc.) and 2) can we explain why we humans view some things as more agent-like than others (and this is a cognitive science question that I worked on a bit some years ago with these guys: http://web.mit.edu/cocosci/archive/Papers/secret-agent-05.pdf ). I never got around to publishing my results—but I was discovering something very much like what you write. I was testing the hypothesis that if a thing seems to “plan” further ahead, we view it as an agent—but instead I was finding that the number of mistakes it makes in the planning is actually more important.
I really appreciate your care in having a supportive tone here—it is a bit heart-wrenching to read some of the more directly critical comments.
great point about the non-consensual nature of Ea’s actions—it does create a dark undertone to the story, and needs either correcting or expanding (perhaps framing it as the source of the “shadow of sexuality”—so we might also remember the risks)
the heteronormative line I did notice, and I think it could generalize straightforwardly—this was just the simplest place to start. I love your suggestion of defining “sex” as “acting on a body specifically to produce pleasure in that body.”
And yes, there are definitely many many aspects of sex that can then be addressed within this lore—like rape, consent, STDs, procreation, sublimation, psychological impacts, gender, family, etc. Taking the Freudian approach, we could really frame all aspects of human life within this context—could be a fun exercise.
I guess the key hypothesis I’m suggesting here is that explaining the many varied aspects of sexuality in terms of a deity could help to clarify all its complexity—just as the pantheon of gods helped early pagan cultures make sense of the world and make some successful predictions / inventions. It could be nicer to have a science-like explanation, but people would have a harder time keeping that straight (and I believe we don’t yet have enough consensus in psychology as a science anyway).
yeah I don’t know how cultural myths like Santa form or where they start—now they are grounded in rituals, but I haven’t looked at how they were popularized in the first place.
wonderful—thanks so much for the references! “moral case against leaving the house” is a nice example to have in the back pocket :)
Great points—I’m more-or-less on-board with everything you say. Ontology in QM I think is quite inherently murky—so I try to avoid talking about “what’s really real” (although personally I find the Relational QM perspective on this to be clearest—and with some handwaving I could carry it over to QD, I think).
Social quantum darwinism—yeah, sounds about right. And yeah, the word “quantum” is a bit ambiguous here—it’s a bit of a political choice whether to use it or avoid it. Although besides superpositions and tensor products, quantum cognition also includes collapse—and that’s taking quite a few (yes, not all!) ingredients from the quantum playbook, perhaps enough to warrant the name?
There can never be an “objective consensus” about what happens in the bomb cavity,
Ah, nice catch—I see your point now, quite interesting. Now I’m curious whether this bomb-testing setup makes trouble for other quantum foundation frameworks too...? As for QD, I think we could make it work—here is a first attempt, let me know what you think (honestly, I’m just using decoherence here, nothing else):
If the bomb is “live”, then the two paths will quickly entangle many degrees of freedom of the environment, and so you can’t get reproducible records that involve interference between the two branches. If the bomb is a “dud”, then the two paths remain contained to the system, and can interfere before making copies of the measurement outcomes.
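Just to check the arithmetic behind that branch-counting argument, here is a minimal sketch of the Elitzur–Vaidman numbers (pure Python; I’m assuming the symmetric 50/50 beamsplitter convention, and modeling the live bomb crudely as a projective measurement of one arm):

```python
from math import sqrt

def beamsplitter(a0, a1):
    """Symmetric 50/50 beamsplitter acting on the two arm amplitudes."""
    return ((a0 + 1j * a1) / sqrt(2), (1j * a0 + a1) / sqrt(2))

a0, a1 = beamsplitter(1, 0)              # photon enters arm 0

# Dud: both arms stay coherent and interfere at the second beamsplitter.
d0, d1 = beamsplitter(a0, a1)
p_dark_dud = abs(d0) ** 2                # dark port never fires

# Live: the bomb 'measures' arm 0, decohering the two branches.
p_explode = abs(a0) ** 2                 # probability 1/2 of explosion
l0, l1 = beamsplitter(0, a1 / abs(a1))   # surviving branch, renormalized
p_dark_live = (1 - p_explode) * abs(l0) ** 2  # dark-port click flags a live bomb
```

The dark port is silent for a dud (perfect interference) but clicks a quarter of the time for a live bomb, which matches the standard result—and the only physics invoked above is exactly the decoherence story: a live bomb destroys the coherence between branches.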
Honestly, I have a bit of trouble arguing about quantum foundations approaches, since they all boil down to the same empirical predictions (sort of by definition) and most are inherently not falsifiable—so it ultimately feels like a personal preference of what argumentation you find convincing.
Is it not the difference between having intrinsic probability in your definition of reproducibility and not having it?
I just meant that the good-old scientific method is what we used to prove classical mechanics, statistical mechanics, and QM. In each case, it’s a matter of anyone repeating the experiment getting the same outcome—whether that outcome is “ball rolls down” or “ball rolls down 20% of the time.” I’m trying to see if we can say something in cases where no outcome is quite reproducible—probabilistic or otherwise. Knightian uncertainty is one way this could happen. Another is cases where we may be able to say something more than “I don’t know, so it’s 50-50,” but where that’s the only truly reproducible statement.
I’m really excited about this post, as it relates super closely to a recent paper I published (in Science!) about spontaneous organization of complex systems—like when a house builds itself somehow, or utility self-maximizes just following natural dynamics of the world. I have some fear of spamming, but I’m really excited others are thinking along these lines—so I wanted to share a post I wrote explaining the idea in that paper https://medium.com/bs3/designing-environments-to-select-designs-339d59a9a8ce
Would love to hear your thoughts!