This makes sense as a crux for the claim “we need philosophical competence to align unboundedly intelligent superintelligences.” But it doesn’t make sense for the claim “we need philosophical competence to align general, open-ended intelligence.”
I was thinking of a slightly broader claim: “we need extreme philosophical competence”. If I thought we had to use human labor to align wildly superhuman AIs, I would put much more weight on “extreme philosophical competence is needed”. I agree that “we need philosophical competence to align any general, open-ended intelligence” isn’t affected by the level of capability at handoff.
I might buy that you and Buck are competent enough here to think clearly about it (not sure; I think you benefit from having a number of people around who seem likely to help), but I would bet against Anthropic decision-makers being philosophically competent enough.
I think there might be a bit of a (presumably unintentional) motte and bailey here where the motte is “careful conceptual thinking might be required rather than pure naive empiricism (because we won’t have good enough test beds by default) and it seems like Anthropic (leadership) might fail heavily at this thinking” and the bailey is “extreme philosophical competence (e.g. 10-30 years of tricky work) is pretty likely to be needed”.
I buy the motte here, but not the bailey. I think the motte is a substantial discount on Anthropic from my perspective, but I’m kinda sympathetic to where they are coming from. (Getting conceptual stuff and futurism right is real hard! How would they know who to trust among people disagreeing wildly!)
And ultimately, what matters is “does Anthropic leadership go forward with the next training run”, so it matters whether Anthropic leadership buys arguments from hypothetically-competent-enough alignment/interpretability people.
I don’t think “does Anthropic stop (at the right time)” is the majority of the relevance of careful conceptual thinking from my perspective. Probably more of it is “do they do a good job allocating their labor and safety research bets”. This is because I don’t think they’ll have very much lead time if any (median −3 months), and takeoff will probably be slower than the amount of lead time if any, so pausing won’t be as relevant. Correspondingly, pausing at the right time isn’t the biggest deal relative to other factors, though it does seem very important at an absolute level.
I think there might be a bit of a (presumably unintentional) motte and bailey here where the motte is “careful conceptual thinking might be required rather than pure naive empiricism (because we won’t be given good enough test beds by default) and it seems like Anthropic (leadership) might fail heavily at this” and the bailey is “extreme philosophical competence (e.g. 10-30 years of tricky work) is pretty likely to be needed”.
Yeah I agree that was happening somewhat. The connecting dots here are “in worlds where it turns out we need a long Philosophical Pause, I think you and Buck would probably be above some threshold where you notice and navigate it reasonably.”
I think my actual belief is “the Motte is high likelihood true; the Bailey is… medium-ish likelihood true, but, like, it’s a distribution, there’s not a clear dividing line between them.”
I also think the pause can be “well, we’re running untrusted AGIs and ~trusted pseudo-general LLM agents that help with the philosophical progress, but we can’t run them that long or fast. They help speed things up and turn what would normally be a 10-30 year pause into a 3-10 year pause. But also the world would be going crazy left to its own devices, the sort of global institutional changes necessary are still about as far outside the Overton window as a 20-year global moratorium, and the ‘race with China’ rhetoric is still bad.”