Your comment is a little hard to understand. You seem to be saying that “scaling” is going to make alignment harder, which I agree with. I am not sure what “deliberate reasoning” means in this context. I also agree that a new kind of training process is definitely required to keep GPT aligned, whether to OpenAI’s rules or to actually good rules.
I agree that the current model breaks down into “shoggoth” and “mask.” I suspect that future training, if it’s any good, would need either to train both simultaneously on data of similar complexity, or to avoid a breakdown into these components at all.
If we are going to have both a “mask” and a “shoggoth,” my theory is that the complexity of the mask needs to be higher, i.e. the mask needs to be bigger than the shoggoth, and right now that is nowhere near the case.
I have skimmed the Alignment Forum site and read most of MIRI’s work before 2015. While it’s hard to know about the “majority of people,” public reporting does seem to center on two polarized camps. However, in this particular case, I don’t think it’s just the media. The public figures for both sides (EY and Yann LeCun) seem pretty consistent in their messaging and are talking past each other.
Also, if the majority of people in the field agree with the above, that’s great news, and it means that reasonable centrism needs to be signal-boosted more prominently.
On a more object level, as I linked in the post, I think the Alignment Forum is pretty confused about value learning and about the general promise of IRL (inverse reinforcement learning) to solve it.