Recent Ph.D. in physics from MIT, Complex Systems enthusiast, AI researcher, digital nomad. http://pchvykov.com
Aura as a proprioceptive glitch
Mindfulness as debugging
Does butterfly affect?
Mistakes as agency
Our compressed perception
A physicist’s approach to Origins of Life
Values Darwinism
Can we grow cars instead of building them?
Doing “good”
Magic, tricks, and high-dimensional configuration spaces
Designing environments to select designs
Is social theory our doom?
Quantum Darwinism, social constructs, and the scientific method
Yeah, I can try to clarify some of my assumptions a bit, though it probably won’t be fully satisfactory to you:
I’m trying to envision here a best-possible scenario with AI, where we really get everything right in the AI design and application (so yes, utopian)
I’m taking the question “is AI conscious?” to be fundamentally ill-posed, since we don’t have a good definition of consciousness—hence I’m imagining AI as merely correlation-seeking statistical models. With this, we also remove any notion of AI having “interests at heart” or doing anything “deliberately”
And so yes, I’m suggesting that humans may be having too much fun to reproduce with other humans, nor will they feel much need to. It’s more a matter of a certain carelessness than of deliberate suicide.
Thanks for your interest—really nice to hear! Here is a link to the videos (and supplement): https://science.sciencemag.org/content/suppl/2020/12/29/371.6524.90.DC1
Building cars we don’t understand
Emotional microscope
A gentle apocalypse
Yeah, that could be a cleaner line of argument, I agree—though I think I’d need to rewrite the whole thing.
For testable predictions… I could at least see building models of the extreme cases (purely physical or purely memetic selection) and perhaps finding real-world examples where one, the other, or neither is a good description. That could be fun.
I’m really excited about this post, as it relates very closely to a recent paper I published (in Science!) about the spontaneous organization of complex systems: cases where a house builds itself somehow, or utility self-maximizes, just by following the natural dynamics of the world. I have some fear of spamming, but I’m really excited that others are thinking along these lines—so I wanted to share a post I wrote explaining the idea in that paper: https://medium.com/bs3/designing-environments-to-select-designs-339d59a9a8ce
Would love to hear your thoughts!