Along these lines, have any scientists tried running evolutionary algorithms to see whether we can simulate, at least approximately, the spontaneous evolution of human-like life forms?
Not exactly that, or anything so complex, but I’ve been planning a small-scale evolution simulator. It would get its computational shortcuts from a much simpler chemistry: just rich enough to allow diverse types of reactions, a favored direction for them, and some form of self-replication.
But my goal is not to learn about extant species or alternate evolutionary paths, but rather to explore the interplay between life, intelligence, thermodynamics, and complexity. Questions like: under what conditions can a self-replicator become more complex while continuing to replicate? What thermodynamic conditions allow a system to stay very far from equilibrium (i.e., to become a dissipative system)? What rules must “intelligence” obey, and what costs must it pay?
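To make that concrete, here is a minimal sketch of what such a stripped-down chemistry might look like. This is purely illustrative, not any particular design: the species (`A`, `B`, `AB`), the reaction set, and every rate are invented. The favored direction comes from giving each reaction a higher rate than its reverse, and self-replication comes from one autocatalytic reaction.

```python
import random

# Invented toy chemistry: abstract species and reactions with made-up rates.
# Each entry is (reactants, products, rate); a forward rate higher than the
# matching reverse rate gives that reaction a favored direction.
REACTIONS = [
    (("A", "B"), ("AB",), 0.9),             # condensation, favored forward
    (("AB",), ("A", "B"), 0.1),             # disfavored reverse direction
    (("AB", "A", "B"), ("AB", "AB"), 0.5),  # autocatalysis: AB templates a copy
]

def simulate(steps, seed=0, counts=None):
    """Simple stochastic simulation: each step, pick one reaction with
    probability proportional to rate x reactant availability, then apply it."""
    rng = random.Random(seed)
    counts = counts or {"A": 500, "B": 500, "AB": 1}
    for _ in range(steps):
        weights = []
        for reactants, _products, rate in REACTIONS:
            w = rate
            for r in reactants:
                w *= counts.get(r, 0)  # weight is zero if any reactant is absent
            weights.append(w)
        total = sum(weights)
        if total == 0:
            break  # nothing left that can react
        x = rng.uniform(0, total)
        for (reactants, products, _rate), w in zip(REACTIONS, weights):
            x -= w
            if x <= 0:
                for r in reactants:
                    counts[r] -= 1
                for p in products:
                    counts[p] = counts.get(p, 0) + 1
                break
    return counts

counts = simulate(2000)
```

Even this crude setup shows the ingredients the comment describes: mass is conserved, reactions run preferentially in one direction, and the `AB` replicator accumulates at the expense of its raw materials.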
I’d be very interested in seeing the results of such an experiment. Is this intended to be for AI-related research, by the way?
Yes, though not in affiliation with any university or other group. And it’s as much for my own understanding as to produce anything novel.
Be patient though: I announced my intentions to do this ~4 months ago, and still haven’t done anything on the implementation side. I’ve just been reading books and papers about those topics and gleaning insights on how they relate.