You are aware chatbots have been “beating” the original Turing test since 2014, right? (And arguably even before)
Also, AGI-complete == fools 1⁄3 of human judges in an x-minute conversation via text? Ahm, no, just no.
That statement is meaningless unless you define the Turing test, and it remains meaningless even if you do, because there is literally no definition of “AGI-complete”. AGI is more of a generic term used to mean “kinda like a human”, but it’s not very concrete.
On the whole, yes, some games might prove too difficult for RL to beat… but I can’t think of any in particular. I think the statement holds for basically any popular competitive game (e.g. one where there are currently cash prizes above $1000 to be won). I’m sure one could design an adversarial game specifically designed not to be beaten by RL but doable by a human… but that’s another story.

“Turing test, which is to say AGI-complete”
Yes, I was in fact. Seeing where this internet argument is going, I think it’s best to leave it here.
So, in that case.
If your original chain of logic is:
1. An RL-based algorithm that could play any game could pass the Turing test
2. An algorithm that can pass the Turing test is “AGI complete”, thus it is unlikely that (1) will happen soon
And you agree with the statement:
3. An algorithm did pass the Turing test in 2014
You either:
a) Have a contradiction
b) Must have some specific definition of the Turing test under which 3 is untrue (and more generally, no known algorithm can pass the Turing test)
I assume your position here is b and I’d love to hear it.
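The contradiction in (a) can be made explicit: read premise 2 as “passing the Turing test implies AGI, and AGI has not been achieved”, and premise 3 as “the Turing test has been passed”. A brute-force truth-table check (a minimal sketch — the proposition names `p` and `agi` are illustrative labels, not terms from the thread) confirms the three claims cannot all hold:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Hypothetical proposition labels:
#   p   = "an algorithm has passed the Turing test"  (premise 3)
#   agi = "AGI has been achieved"
# Premise 2 is read as: (p -> agi) together with (not agi).
unsat = True
for p, agi in product([False, True], repeat=2):
    # Check whether all three premises hold under this assignment.
    if p and implies(p, agi) and (not agi):
        unsat = False
print(unsat)  # True: no assignment satisfies all premises at once
```

So anyone accepting all three must retreat to position (b): some stricter reading of the Turing test under which premise 3 is false.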
I’d also love to hear the causal reasoning behind 2 (maybe explained by your definition of the Turing test?).
If your definitions differ from commonly accepted ones and you rely on causal claims that are not widely accepted, you must at least provide your versions of the definitions and some motivation for the causality.