It seems to me like there are two separate ideas in this post.
One is HCH itself. Actually, HCH (defined as an infinitely large tree of humans) is a family of schemes with two parameters, so it's really
$$\mathrm{HCH}_{h,t}$$
where h is a human, and t the amount of time each human has to think. This is impossible to implement since we don’t have infinitely many copies of the same human—and also because the scheme requires that, every time a human consults a subtree, we freeze time for that human until the subtree has computed the answer. But it’s useful insofar as it can be approximated.
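The tree structure can be modeled as an ordinary blocking recursion: the parent call waiting on a child call plays the role of freezing time for the consulting human. Here is a minimal toy sketch in Python, assuming a hypothetical `consult_human` interface (a stand-in for giving a copy of $h$ a question, a way to pose subquestions, and a time budget) and a hypothetical `toy_human` for demonstration:

```python
def hch(question: str, consult_human, depth_limit: int = 3) -> str:
    """Answer `question` via a (finite-depth) tree of copies of a human."""
    def ask_subtree(subquestion: str) -> str:
        if depth_limit == 0:
            # The true HCH tree is infinite; a finite approximation must
            # bottom out somehow, e.g. with an unassisted human answer.
            return consult_human(subquestion, ask=None)
        # Blocking recursive call = "freeze time" for the parent human.
        return hch(subquestion, consult_human, depth_limit - 1)
    return consult_human(question, ask=ask_subtree)


def toy_human(question: str, ask) -> str:
    # Hypothetical stand-in for h: sums "+"-separated integers,
    # delegating the two halves of the problem to subtrees when possible.
    parts = question.split("+")
    if len(parts) == 1 or ask is None:
        return str(sum(int(p) for p in parts))
    mid = len(parts) // 2
    left = ask("+".join(parts[:mid]))
    right = ask("+".join(parts[mid:]))
    return str(int(left) + int(right))
```

For example, `hch("1+2+3+4", toy_human)` decomposes the sum across subtrees rather than answering it in one step, which is the shape the Factored Cognition hypothesis bets on.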
Whether there exists any human $h$ such that, if we set $t$ to an hour, $\mathrm{HCH}_{h,t}$ achieves superintelligent performance on a question-answering task is unclear, and relies on some version of the Factored Cognition hypothesis.
Separately, the $\mathrm{HCH}^x$ schemes are about implementations, but they still define targets that can only be approximated, not literally implemented.
Taking just the first one: if $P$ is a prediction algorithm, then each $\mathrm{HCH}^P_n$ for $n \in \mathbb{N}$ can be defined recursively. Namely, we set $\mathrm{HCH}^P_0$ to be $P$'s learned prediction of $h$'s output, and $\mathrm{HCH}^P_n$ to be $P$'s learned prediction of $\mathrm{HCH}^P_{n-1}$'s output. Each step requires a training process. We could then set $\mathrm{HCH}^P := \lim_{n\to\infty} \mathrm{HCH}^P_n$, but this is not literally achievable since it requires infinitely many training steps.
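The iteration can be sketched as a loop over training rounds. This is only a toy model: `train_predictor` is a hypothetical stand-in for the training process (here it can be instantiated as a perfect predictor, i.e. the identity on functions), and `human` is a hypothetical stand-in for $h$ answering questions with the help of an assistant `ask`:

```python
def iterate_hch_p(human, train_predictor, n: int):
    """Return (a toy model of) HCH^P_n after n rounds of training."""
    # HCH^P_0: P's learned prediction of the unassisted human's output.
    model = train_predictor(lambda q: human(q, ask=None))
    for _ in range(n):
        # HCH^P_k: P's learned prediction of the human assisted by HCH^P_{k-1}.
        prev = model
        model = train_predictor(lambda q, prev=prev: human(q, ask=prev))
    return model


def toy_human(question: str, ask) -> str:
    # Hypothetical stand-in for h: sums "+"-separated integers,
    # delegating the two halves to the assistant when one is available.
    parts = question.split("+")
    if len(parts) == 1 or ask is None:
        return str(sum(int(p) for p in parts))
    mid = len(parts) // 2
    return str(int(ask("+".join(parts[:mid]))) + int(ask("+".join(parts[mid:]))))


# With a perfect predictor, training is the identity on functions.
perfect_predictor = lambda target: target
```

With `perfect_predictor`, `iterate_hch_p(toy_human, perfect_predictor, n)` is exactly a depth-$n$ human-consultation tree; the point of the limit $\mathrm{HCH}^P$ is that no finite $n$ (i.e. no finite number of training runs) reaches the full tree.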