I like this point a lot, and your model of me is accurate, at least insofar as I’m capable of simming this without actually experiencing it. For instance, I have similar thoughts about some of my cutting/oversimplifying black-or-white heuristics, which seem less good than the shades-of-gray epistemics of people around me, and yet often produce more solid results. I don’t conclude from this that those heuristics are better, but rather that I should be confused about my model of what’s going on.
that makes a ton of sense for theoretically justified reasons I don’t know how to explain yet. anyone want to collab with me on a sequence? I’m a bit blocked on (1) exactly what my goal is and (2) what I should be practicing in order to be able to write a sequence (given that I’m averse to writing post-style content right now)