Hey, thanks for the question! And I’m glad you liked the part about AGZ. (I also found this video by Robert Miles extremely helpful and accessible for understanding AGZ.)
This seems speculative. How do you know that a hypothetical infinite HCH tree does not depend on the capabilities of the human?
Hm, I wouldn’t say that it doesn’t depend on the capabilities of the human. I think it does, but it depends on the type of reasoning they employ and not on, e.g., their working memory (to the extent that the general hypothesis of factored cognition holds, i.e. that we can successfully solve tasks by breaking them down into smaller tasks).
HCH does not depend on the starting point (“What difficulty of task can Rosa solve on her own?”)
Maybe the best way to understand this is to think in terms of computation/time to think. What kind of tasks Rosa can solve obviously depends a lot on how much computation/time she has to think about them. But the final outcome of HCH shouldn’t change if we halve the computation/time the first node has (at least down to a certain minimum level of computation/time), since the next lower node can just do the thinking that the first node would have done with more time. I guess this assumes that the first node would use extra time to make more quantitative progress, as opposed to qualitative progress. (I think I tried to capture quality with ‘type of reasoning process’.)
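To make the delegation idea concrete, here’s a toy sketch (my own illustration, not anything from the original HCH writeups): a “node” has to sum a list of numbers but can only handle a bounded amount of work itself, so larger tasks get split and delegated to child nodes. The point is that halving the per-node budget doesn’t change the final answer; the same thinking just happens one level deeper in the tree.

```python
# Toy model of HCH-style task decomposition. A node with budget b can
# sum at most b numbers directly; anything larger is broken into two
# subtasks, each handed to a fresh child node with the same budget.

def hch_solve(task, budget):
    """Solve `task` (a list of numbers to sum) with per-node `budget` >= 1."""
    if len(task) <= budget:
        # Small enough to solve directly within this node's budget.
        return sum(task)
    # Too big: factor the task and consult two child nodes.
    mid = len(task) // 2
    return hch_solve(task[:mid], budget) + hch_solve(task[mid:], budget)

task = list(range(100))
# Halving the root's budget doesn't change the outcome -- lower nodes
# absorb the thinking the root no longer has time for.
assert hch_solve(task, budget=8) == hch_solve(task, budget=4) == sum(task)
```

Of course, this bakes in exactly the factored-cognition assumption from above: the task decomposes cleanly into subtasks whose answers recombine losslessly, which is the “quantitative progress” case rather than the “qualitative” one.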
Sorry, this answer is a bit rambly; I can spend some more time on it if this doesn’t make sense! (There’s also a good chance it doesn’t make sense because the idea itself doesn’t make sense / I misunderstand stuff, and not just because I explain it poorly.)