This seems like a testable hypothesis. What would it take to train a GPTx on Eliezer’s writings and compare its output with the original? And then check whether the resulting EliezerGPT is immeasurably smarter than the original?
Alternatively, since predicting Eliezer is, in a way, like inverting a one-way function, GPTx might top out well below any reasonably accurate level of predictability, unless P=NP.
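
For concreteness, here is a rough sketch of what the fine-tuning half of that experiment might look like with the Hugging Face stack. The corpus file name, the base model (`gpt2` as a stand-in for "GPTx"), the hyperparameters, and the comparison-by-continuation step at the end are all placeholder assumptions on my part, not a worked-out recipe:

```python
# Minimal sketch: fine-tune a small causal LM on a plain-text corpus of the
# writings, then compare generated continuations against held-out originals.
# "eliezer_corpus.txt", the base model, and all hyperparameters are assumptions.
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

base_model = "gpt2"  # stand-in for whichever GPTx one can actually fine-tune
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base_model)

# Load the corpus and tokenize it into model-sized chunks.
dataset = load_dataset("text", data_files={"train": "eliezer_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Standard causal-LM objective (mlm=False).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="eliezer-gpt",
        num_train_epochs=3,
        per_device_train_batch_size=4,
        learning_rate=5e-5,
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()

# "Compare its output with the original": feed the model the opening of a
# held-out post and compare its continuation (or per-token loss) with what
# was actually written.
prompt = "The first virtue of rationality is"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Even granting all that, this only tests next-token mimicry on held-out text, which is a much weaker standard than the "predict Eliezer well enough to be smarter than him" reading.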