Aside from the redundancy and the letdown ending, I like this writing, as it captures echoes of reasoning failures (that I find myself falling for at times); it seems to be written not just for AI researchers (& adjacents) in 2025, but for many minds across “now and then”.
It strikes me that Humman does grasp truths (reality is complicated, and people do have different strengths) but mistakes “true at this/his resolution” for “true at all scales”. He seems to assume self-similar scaling (like a tree) instead of considering realities that scale in non-self-similar ways (like a snowflake*, with shifts from dendritic to radial structure). Moreover, he uses his understanding of complexity as a thought-terminating invocation rather than as a call for deeper/clearer/more coherent modeling(s). Both are fairly common failure modes, and it would be cool to leave them behind, but I’m unaware of stable ways to do so.
*I don’t mean the snowflake example as a supportive argument for his position, ~ “every snowflake is unique and incomparable”, though I do like that sentiment on my best days.
Updated a bit on self-similar vs. non-self-similar scaling: I’m less sure than I previously thought that I understand how scaling works, from individuals to different types of collectives, plus the time dynamics.