It might be that the race for AGI gets replaced with a race for market dominance, and major companies stop optimising in the direction of more intelligence. Unlikely, I think, but it could potentially be good in the Pause AI sense.
Sycophantic models aren’t necessarily less intelligent. Instead, they use their intelligence to model the user and their preferences. E.g. I’d expect a properly trained GPT-4 to be better at sycophancy than GPT-3, which in turn would beat GPT-2. So even if labs started optimizing for this, I would expect them still to be incentivized towards scaling up models and capabilities.
Doesn’t matter that much because Meta/XAI or some other company building off open source models will choose the sycophancy option.
You assume “no sycophancy” was the right option.
Good point. Perhaps it would be better to say they’ll stop focusing so much on IMO problems and coding benchmarks?