Humans Consulting HCH
(See also: strong HCH.)
Consider a human Hugh who has access to a question-answering machine. Suppose the machine answers question Q by perfectly imitating how Hugh would answer question Q, if Hugh had access to the question-answering machine.
That is, Hugh is able to consult a copy of Hugh, who is able to consult a copy of Hugh, who is able to consult a copy of Hugh…
Let’s call this process HCH, for “Humans Consulting HCH.”
I’ve talked about many variants of this process before, but I find it easier to think about with a nice handle. (Credit to Eliezer for proposing using a recursive acronym.)
HCH is easy to specify very precisely. For now, I think that HCH is our best way to precisely specify “a human’s enlightened judgment.” It’s got plenty of problems, but for now I don’t know anything better.
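To make the recursion concrete, here is a minimal Python sketch of the definition above. It is an illustration only: `hch`, `toy_hugh`, the splitting rule, and the length threshold are hypothetical stand-ins introduced for this sketch, not anything from the post.

```python
# A minimal sketch of the recursion in the definition above (illustration only,
# not from the post). `human_answer` stands in for Hugh: given a question and a
# `consult` function, it returns an answer, possibly after calling `consult`
# on subquestions.

def hch(question, human_answer):
    def consult(subquestion):
        # Consulting the machine spins up a fresh copy of Hugh,
        # who can in turn consult the machine, and so on.
        return hch(subquestion, human_answer)
    return human_answer(question, consult)

# Hypothetical stand-in for Hugh: answers short questions directly, otherwise
# splits the question in half and combines the two sub-answers.
def toy_hugh(question, consult):
    if len(question) <= 4:
        return question.upper()
    mid = len(question) // 2
    return consult(question[:mid]) + consult(question[mid:])

print(hch("what is hch?", toy_hugh))  # -> "WHAT IS HCH?"
```

Note that the recursion only terminates because this stand-in sometimes answers without consulting; the comment thread below discusses exactly this issue.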
Elaborations
We can define realizable variants of this inaccessible ideal:
For a particular prediction algorithm P, define HCHᴾ as:
“P’s prediction of what a human would say after consulting HCHᴾ”
For a reinforcement learning algorithm A, define max-HCHᴬ as:
“A’s output when maximizing the evaluation of a human after consulting max-HCHᴬ”
For a given market structure and participants, define HCHᵐᵃʳᵏᵉᵗ as:
“the market’s prediction of what a human will say after consulting HCHᵐᵃʳᵏᵉᵗ”
Note that e.g. HCHᴾ is totally different from “P’s prediction of HCH.” HCHᴾ will generally make worse predictions, but it is easier to implement.
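To illustrate the sense in which HCHᴾ is easier to implement, here is a rough sketch of approximating it with finitely many rounds of training. The helper names (`sample_questions`, `human_with_access`, `train_predictor`) are hypothetical, introduced for this sketch rather than taken from the post.

```python
# Sketch of approximating HCH^P by iterated training (illustration only).
# Hypothetical helpers, not from the post:
#   sample_questions()          -> a batch of training questions
#   human_with_access(q, model) -> the human's answer to q, consulting `model`
#                                  when it is not None
#   train_predictor(examples)   -> a predictor P fit to (question, answer) pairs

def approximate_hch_p(num_rounds, sample_questions, human_with_access, train_predictor):
    model = None  # round 0: the human answers unassisted
    for _ in range(num_rounds):
        # Collect the behavior of "a human consulting the current model"...
        examples = [(q, human_with_access(q, model)) for q in sample_questions()]
        # ...and train P to predict that behavior.
        model = train_predictor(examples)
    return model  # an approximation of HCH^P, improving as num_rounds grows
```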
Hope
The best case is that HCHᴾ, max-HCHᴬ, and HCHᵐᵃʳᵏᵉᵗ are:
As capable as the underlying predictor, reinforcement learner, or market participants.
Aligned with the enlightened judgment of the human, e.g. as evaluated by HCH.
(At least when the human is suitably prudent and wise.)
It is clear from the definitions that these systems can’t be any more capable than the underlying predictor/learner/market. I honestly don’t know whether we should expect them to match the underlying capabilities. My intuition is that max-HCHᴬ probably can, but that HCHᴾ and HCHᵐᵃʳᵏᵉᵗ probably can’t.
It is similarly unclear whether the system continues to reflect the human’s judgment. In some sense this is in tension with the desire to be capable — the more guarded the human, the less capable the system but the more likely it is to reflect their interests. The question is whether a prudent human can achieve both goals.
This was originally posted here on 29th January 2016.
Tomorrow’s AI Alignment Forum sequences will take a break, and tomorrow’s post will be Issue #34 of the Alignment Newsletter.
The next post in this sequence is ‘Corrigibility’ by Paul Christiano, which will be published on Tuesday 27th November.
Comments
Another question. HCH is defined as a fixed point of a certain process. But that process probably has many fixed points, some of which might be weird. For example, HCH could return a “universal answer” that brainwashes the human using it into returning the same “universal answer”. Or it could be irrationally convinced that e.g. God exists but a proof of that can’t be communicated. What does the landscape of fixed points look like? Since we’ll presumably approximate HCH by something other than actually simulating a lot of people, will the approximation lead to the right fixed point?
Yes, if the queries aren’t well-founded then HCH isn’t uniquely defined even once you specify H; there is a class of solutions. If there is a bad solution, I think you need to do work to rule it out, and wouldn’t count on a method magically finding the right one.
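One way to write out the worry (my notation, not the thread’s): let H(A) denote the human’s answer function when the human can consult an answerer A. Then any answerer satisfying the defining equation is a candidate HCH, including degenerate ones.

```latex
% H(A) = the human's answer function given oracle access to answerer A
% (notation introduced here for illustration).
\[
  \mathrm{HCH} \;=\; H(\mathrm{HCH})
\]
% A degenerate fixed point: if u is a "universal answer" such that a human who
% consults an oracle always returning u ends up returning u as well, then the
% constant answerer const_u also satisfies the equation:
\[
  H(\mathrm{const}_u) \;=\; \mathrm{const}_u
\]
```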
It is not at all clear to me how this works. The questions that immediately occur to me are:
How does the recursion bottom out? If real Hugh’s response to the question is to ask the machine, then perfectly simulated Hugh’s response must be the same. If real Hugh’s response is not to ask the machine, then the machine remains unused.
If, somehow, it bottoms out at level n, then Hugh^n must be answering without consulting the machine. How does that simulated Hugh differ from Hugh^(n-1), such that he is able to do something different?
Does Hugh^n know he’s Hugh^n?
If the Hugh^i (i<n) all just relay Hugh^n’s answer, what is gained over Hugh-prime answering directly?
I think there are lots of strategies here that just fail to work. For example, if Hugh passes on the question with no modification, then you build an infinite tower that never does any work.
But there are strategies that do work. For example, whenever Hugh receives a question he can answer, he does so, and whenever he receives a question that is ‘too complicated’, he divides it into subquestions and consults HCH separately on each subquestion, using the results of the consultation to compute the overall answer. This looks like it will terminate, so long as the answers can flow back up the pyramid. Hugh could also pass along numbers about how subdivided a question has become, or the whole stack trace so far, in case there are problems that seem like they have cyclical dependencies (where I want to find out A, which depends on B, which depends on C, which depends on A, which depends on...). Hugh could pass back upwards results like “I didn’t know how to make progress on the subproblem you gave me.”
For example, you could imagine attempting to prove a mathematical conjecture. The first level has Hugh looking at the whole problem, and he thinks “I don’t know how to solve this, but I would know how to solve it if I had lemmas like A, B, and C.” So he asks HCH to separately solve A, B, and C. This spins up a copy of Hugh looking at A, who also thinks “I don’t know how to solve this, but I would if I had lemmas like Aa, Ab, and Ac.” This spins up a copy of Hugh looking at Aa, who thinks “oh, this is solvable like so; here’s a proof of Aa.” Hugh_A now looks at the proofs, disproofs, and indeterminate results for Aa, Ab, and Ac, and can either write up his conclusion about A or spin up new subagents to examine new subparts of the problem.
Note that in this formulation, you primarily have communication up and down the pyramid, and the communication is normally at the creation and destruction of subagents. It could end up that you prove the same lemma thousands of times across the branches of the tree, because it turned out to be useful in many different places.
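A possible shape for that strategy in code, including a depth budget passed down the tree and an explicit failure result passed back up. This is a sketch under assumed helpers: `can_answer`, `answer_directly`, `decompose`, and `recombine` are hypothetical stand-ins for the human’s judgment.

```python
# Sketch of the decompose-and-consult strategy described above, with a depth
# budget and a "no progress" report. All helper names are hypothetical.

NO_PROGRESS = "I didn't know how to make progress on the subproblem you gave me."

def hch_answer(question, depth_budget, can_answer, answer_directly, decompose, recombine):
    if can_answer(question):
        return answer_directly(question)       # simple enough: answer it here
    if depth_budget == 0:
        return NO_PROGRESS                     # report failure instead of recursing forever
    subanswers = [
        hch_answer(sub, depth_budget - 1, can_answer, answer_directly, decompose, recombine)
        for sub in decompose(question)         # e.g. lemmas A, B, C
    ]
    return recombine(question, subanswers)     # e.g. assemble the lemmas into a proof
```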
So, one way of solving the recursion problem would be for Hugh to never use the machine as a first resort for answering a question Q. Instead, Hugh must resolve to ask the machine only for answers to questions that are “smaller” than Q in some well-ordered sense, and do the rest of the work himself.
But unless the machine is faster at simulating Hugh than Hugh is at being Hugh, it is not clear what is gained. Even if it is, all you get is the same answer that unaided Hugh would have got, but faster.
Without resource constraints, I feel like my intuition kinda slides off the model. Do you have a sense of HCH’s performance under resource constraints? For example, let’s say each human can spend 1 day thinking, make 10 queries to the next level, and there are 10 levels in total. What’s the hardest problem solvable by this setup that you can imagine?
Depends on the human. I think 10 levels with branching factor 10 and 1 day per step is in the ballpark of “go from no calculus to general relativity” (at least if we strengthen the model by allowing pointers), but it’s hard to know, and most people aren’t so optimistic.
Yeah, I don’t know how optimistic I should be, given that one day isn’t enough even to get fluent with calculus. Can you describe the thought process behind your guess? Maybe describe how you imagine the typical days of people inside the tree, depending on the level?
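For scale, the setup in this exchange (10 levels, 10 consultations per human, 1 day of thinking each) corresponds to roughly a billion human-days of thinking in total. The count below is my own back-of-the-envelope arithmetic, treating the top-level human as the first of the 10 levels.

```python
# Back-of-the-envelope size of the tree described above (my arithmetic, not
# from the thread): 10 levels, branching factor 10, 1 day of thinking per copy.
levels, branching = 10, 10
human_days = sum(branching ** k for k in range(levels))  # 1 + 10 + ... + 10^9
print(human_days)          # 1111111111, about 1.1 billion human-days
print(human_days / 365)    # roughly 3 million human-years (though only 10 days of serial time)
```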
It seems to me like there are two separate ideas in this post.
One is HCH itself. Actually, HCH (defined as an infinitely large tree of humans) is a family of schemes with two parameters, so it’s really HCHₕ,ₜ, where h is a human and t is the amount of time each human has to think. This is impossible to implement, since we don’t have infinitely many copies of the same human, and also because the scheme requires that, every time a human consults a subtree, we freeze time for that human until the subtree has computed the answer. But it’s useful insofar as it can be approximated.
Whether or not there exists any human such that, if we set t to an hour, HCHₕ,ₜ has superintelligent performance on a question-answering task is unclear and relies on some version of the Factored Cognition hypothesis.
Separately, the HCHˣ schemes (HCHᴾ, max-HCHᴬ, HCHᵐᵃʳᵏᵉᵗ) are about implementations, but they still define targets that can only be approximated, not literally implemented.
Taking just the first one: if P is a prediction algorithm, then each HCHᴾₙ for n ∈ ℕ can be defined recursively. Namely, we set HCHᴾ₀ to be P’s learned prediction of h’s output, and HCHᴾₙ to be P’s learned prediction of HCHᴾₙ₋₁’s output. Each step requires a training process. We could then set HCHᴾ to be the limit of HCHᴾₙ as n → ∞, but this is not literally achievable since it requires infinitely many training steps.