It’s cool to see someone (you) making an earnest effort to formulate, and attempt to answer, questions usually skirted by the mainstream, and not shying away from concrete examples in favor of abstract theorizing. The kicked rotor example is a great way to start: take a simple model and see what goes wrong (or right) for an unusual model of “life”. My personal go-to examples are stars (which are born, evolve, procreate, die, fight entropy for a time, and generally fit every criterion you listed), solar flares, and… boiling water bubbles.
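For anyone who wants to play with the example, here is a minimal sketch of the kicked rotor in its standard-map form (p' = p + K sin θ, θ' = θ + p' mod 2π); the parameter values below are arbitrary illustrations, not anything from the original post:

```python
import math

def kicked_rotor(theta, p, K, n_steps):
    """Iterate the Chirikov standard map, the usual discrete-time
    model of the kicked rotor: p' = p + K*sin(theta), theta' = theta + p'."""
    traj = [(theta, p)]
    for _ in range(n_steps):
        p = p + K * math.sin(theta)
        theta = (theta + p) % (2 * math.pi)
        traj.append((theta, p))
    return traj

# Small K gives near-regular motion; K well above ~1 gives widespread chaos.
regular = kicked_rotor(theta=1.0, p=0.0, K=0.5, n_steps=100)
chaotic = kicked_rotor(theta=1.0, p=0.0, K=5.0, n_steps=100)
```

Whether trajectories like these count as “life-like” under the post’s criteria is exactly the kind of question the simple model makes easy to probe.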
Also, I suspect that “life” as an effective description is not much different from agency as an effective description: https://www.lesswrong.com/posts/NptifNqFw4wT4MuY8/agency-is-bugs-and-uncertainty
Thanks for the support! And yes, this is definitely closely related to questions around agency. With agency, I feel there are two parallel and related questions: 1) can we give a mathematical definition of agency (here I think of info-theoretic measures, abilities to compute, predict, etc.), and 2) can we explain why we humans view some things as more agent-like than others (this is a cognitive science question that I worked on a bit some years ago with these guys: http://web.mit.edu/cocosci/archive/Papers/secret-agent-05.pdf ). I never got around to publishing my results, but I was discovering something very much like what you write. I was testing the hypothesis that if a thing seems to “plan” further ahead, we view it as an agent, but instead I was finding that the number of mistakes it makes in the planning actually matters more.
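A hypothetical toy version of that kind of stimulus (the function name, the 1-D setup, and all parameters are my invention here, not the actual study design): a goal-directed walker whose “planning mistakes” occur at a tunable rate, so one can compare how purposeful the resulting trajectories look at different mistake rates:

```python
import random

def goal_directed_walk(start, goal, mistake_rate, n_steps, seed=0):
    """1-D walker that steps toward `goal`, but with probability
    `mistake_rate` takes a random step instead (a 'planning mistake')."""
    rng = random.Random(seed)
    pos, path = start, [start]
    for _ in range(n_steps):
        if pos != goal and rng.random() < mistake_rate:
            step = rng.choice([-1, 1])          # mistake: random move
        else:
            step = (goal > pos) - (goal < pos)  # sign of (goal - pos)
        pos += step
        path.append(pos)
    return path

# Fewer mistakes -> the trajectory looks more purposeful ('agent-like'):
clean  = goal_directed_walk(0, 10, mistake_rate=0.0, n_steps=20)
sloppy = goal_directed_walk(0, 10, mistake_rate=0.5, n_steps=20)
```

The hypothesis being tested would then be whether observers’ agency ratings track the planning horizon or, as the preliminary results suggested, the mistake rate.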
That paper makes perfect sense in terms of how agents model the universe while constantly interacting with other, similar agents they do not fully understand.
I think this is a counter-intuitive and underappreciated point worth explicating and publishing, actually.
Yeah, I thought so too, but I only had very preliminary results, not enough for a publication… perhaps I could write up a post based on what I had.
Definitely worth starting with a post and seeing where it goes.
Just posted it. It feels like the post came out fairly basic, but I’m still curious about your opinion: https://www.lesswrong.com/posts/aMrhJbvEbXiX2zjJg/mistakes-as-agency