Implications of GPT-2

I was impressed by GPT-2, to the point where I wouldn't be surprised if a future version of it could be used pivotally with existing protocols.

Consider generating half of a Turing test transcript, the other half being supplied by a human judge. If the model's half passes, we could immediately implement an HCH (Humans Consulting HCH) of AI safety researchers solving the alignment problem, if it's within our reach at all. (Note that training the model takes much more compute than generating text from it.)
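
For concreteness, here is a minimal sketch of that setup, assuming the HuggingFace transformers library and the public "gpt2" checkpoint (neither is specified in the post, and the prompt format is purely illustrative): the model generates its turns of the transcript one at a time, while a human judge types the other half.

```python
# Hypothetical sketch: a language model supplies one half of a dialogue
# transcript, a human judge supplies the other. Assumes the HuggingFace
# `transformers` library and the public "gpt2" checkpoint.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

transcript = ""
while True:
    human_line = input("Judge: ")
    transcript += f"Judge: {human_line}\nModel:"
    input_ids = tokenizer.encode(transcript, return_tensors="pt")
    output = model.generate(
        input_ids,
        max_new_tokens=40,  # generate only the model's next turn
        do_sample=True,
        top_k=40,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Keep only the newly generated continuation, cut at the first line break
    # so the model does not also write the judge's next line.
    reply = tokenizer.decode(output[0][input_ids.shape[1]:],
                             skip_special_tokens=True)
    reply = reply.split("\n")[0].strip()
    print(f"Model: {reply}")
    transcript += f" {reply}\n"
```

Note that, as the paragraph above says, all the expensive compute sits in training; a loop like this only runs inference, which is comparatively cheap.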

This might not be the first pivotal application of language models that becomes possible as they get stronger.

It's a source of superintelligence that doesn't automatically run into utility maximizers. It sure doesn't look like Drexler's Comprehensive AI Services, lumpy or no.