Oh geez, looking back at my comment I was extremely unclear. Sorry about that.
Probably not useful as feedback, but the specific things I’m most interested in here are your conclusions. Like, “Not gonna justify this yet, but I think rationalists are susceptible to getting seduced by witches in ways that will turn their lives upside down. The abstractions that predict this are after the fold, and you gotta apply it to your own raw data”. I’m mostly curious about this because I’m trying to figure out how similar our perspectives are. The more similar our conclusions, the more it seems like the “Water in the eggplant” type stuff is true and important, just not for me. The more dissimilar, the more I have to wonder “Wait, what do you actually mean by that? I must be missing some patterns he’s matching while thinking I get it”.
Separately from that, and what may or may not be useful: in general I find it helpful to have more concrete applications spelled out. Not raw data necessarily, since the point isn’t to fuel independent abstraction, but a minimal set of simulated/curated data points to highlight the connection between your abstractions and concrete use cases. Sounds like you’re mostly on board with this though, at least in theory.
I’m with you on the “I just wanna tell you about the cool abstractions I figured out!” thing, by the way, hehe. It’s a lot easier, more fun, and genuinely worth doing first, I think… just also a lot harder to get through to people, IME, because grounding the abstractions is much harder than holding them in the abstract.