That paper makes perfect sense in terms of universe modeling by agents constantly interacting with other similar agents they do not fully understand.
I was testing the hypothesis that if a thing seems to "plan" further ahead, we view it as more of an agent, but I found instead that the number of mistakes it makes in its planning matters more.
I think this is a counter-intuitive and underappreciated point worth explicating and publishing, actually.
yeah, I thought so too, but I only had very preliminary results, not enough for a publication… perhaps I could write up a post based on what I had
Definitely worth starting with a post and seeing where it goes.
Just posted it. It feels like the post came out fairly basic, but I'm still curious to hear your opinion: https://www.lesswrong.com/posts/aMrhJbvEbXiX2zjJg/mistakes-as-agency