It’s tempting to think of the model after steps 1 and 2 as aligned but lacking capabilities, but that’s not accurate. It’s safe, but it doesn’t conform to a positive sense of “alignment,” the kind that involves solving hard problems in ways that are good for humanity. Sure, it can mouth the correct words about being good, but those words aren’t rigidly connected to the latent capabilities the model has. If you try to solve this by pouring tons of resources into steps 1 and 2, you probably end up with something that learns to exploit systematic human errors during step 2.