> But ‘alignment is tractable when you actually work on it’ doesn’t imply ‘the only reason capabilities outgeneralized alignment in our evolutionary history was that evolution was myopic and therefore not able to do long-term planning aimed at alignment desiderata’.
I am not claiming evolution is ‘not able to do long-term planning aimed at alignment desiderata’.
I am claiming it did not even try.
> If you’re myopically optimizing for two things (‘make the agent want to pursue the intended goal’ and ‘make the agent capable at pursuing the intended goal’) and one generalizes vastly better than the other, this points toward a difference between the two myopically-optimized targets.
This looks like a strong steelman of the post, which I gladly accept.
But it seemed to me that the post was arguing:
1. That alignment is hard (the post mentions that technical alignment contains the hard bits, names multiple specific problems in alignment, etc.).
2. That current approaches do not work.
That you do not get alignment by default is a much weaker thesis than 1 and 2, and one that I agree with.
> This would obviously be an incredibly positive development, and would increase our success odds a ton! Nate isn’t arguing ‘when you actually try to do alignment, you can never make any headway’.
This unfortunately didn’t answer my question. We all agree that it would be a positive development; my question was how much of one. From my point of view, it could even be enough.
The question I was trying to ask was: “What difficulty ratio do you see between alignment and capabilities?”
I understood the post as making a claim (among others) that “alignment is much more difficult than capabilities, as evidenced by Natural Selection”.
Many comparisons are made with Natural Selection (NS) optimizing for IGF, on the grounds that this is our only example of an optimization process yielding intelligence.
I would suggest considering one very relevant fact: NS has not optimized for alignment, only for a myopic version of IGF. Humans, so far, have not optimized for alignment either.
Let’s look at some quotes, with those considerations in mind:
- NS has not optimized for alignment, which is why it’s bad at alignment compared to what it has optimized for.
- NS has not optimized for one intelligence not conquering the rest of the world. As such, its track record says nothing about how hard it would be to optimize for that outcome.
- The response is not that NS is not intelligent, but that NS has not even optimized for any of the things you have pointed to.
- My answer would be the same for NS and for humans: alignment is simply not optimized for! People spend vastly more resources on capabilities than on alignment.
If the ratio of resources invested in capabilities versus alignment were reversed, would you still expect alignment to fare so much worse than capabilities?
Let’s say you would: how much better would you expect the situation to be as a result of the ratio being reversed? How much doom would you still expect in that world, compared to now?
Sure, insofar as people will optimize for short-term power (i.e., capabilities) because they are myopic, and power is the name we give to what is useful in most scenarios.
---
I also expect a discontinuity in intelligence. But I think this post does not make a good case for it: a much simpler theory already explains its observations.
I’m very eager to read this.