ML playing any possible game better than humans, assuming a team actually works on that specific game (maybe even if one doesn’t), with human-like inputs and human-like limitations on the granularity of taking inputs and giving outputs.
I disagree with this point in particular. I’m assuming you’re basing this prediction on the recent successes of AlphaStar and OpenAI5, but there are obvious cracks upon closer inspection.
The “any possible game” part, though, is the final nail in the coffin for me, since you can conceive of plenty of games that are equivalent or similar to the Turing test, which is to say AGI-complete.
(Although I guess AGI-completeness is a much smaller deal to you)
Turing test, which is to say AGI-complete
You are aware chatbots have been “beating” the original Turing test since 2014, right? (And arguably even before.) Also, AGI-complete == fools 1⁄3 of human judges in an x-minute conversation via text? Ahm, no, just no.
That statement is meaningless unless you define the Turing test, and it remains meaningless even if you do, because there is literally no definition of “AGI-complete”. AGI is more of a generic term meaning “kinda like a human”; it’s not very concrete.
On the whole, yes, some games might prove too difficult for RL to beat… but I can’t think of any in particular. I think the statement holds for basically any popular competitive game (e.g. one where there are currently cash prizes above $1000 to be won). I’m sure one could design an adversarial game specifically built to not be beatable by RL yet doable by a human… but that’s another story.
You are aware chatbots have been “beating” the original Turing test since 2014, right?
Yes, I was in fact. Seeing where this internet argument is going, I think it’s best to leave it here.
So, in that case.
If your original chain of logic is:
1. An RL-based algorithm that could play any game could pass the Turing test
2. An algorithm that can pass the Turing test is “AGI-complete”, thus it is unlikely that (1) will happen soon
And you agree with the statement:
3. An algorithm did pass the Turing test in 2014
Then you either:
a) Have a contradiction, or
b) Must have some specific definition of the Turing test under which 3 is untrue (and more generally, no known algorithm can pass the Turing test)
I assume your position here is b and I’d love to hear it.
I’d also love to hear the causal reasoning behind (2) (maybe explained by your definition of the Turing test?).
If your definitions differ from commonly accepted ones and you rely on causal claims that are not widely accepted, you must at least provide your versions of the definitions and some motivation for the causality.
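To spell out why (1)–(3) clash: treating “some algorithm passes the Turing test” and “AGI-level systems exist” as propositions, premises (2) and (3) plus the implicit belief that AGI does not yet exist have no consistent truth assignment. A minimal brute-force sketch (the proposition names and helper are mine, purely illustrative):

```python
from itertools import product

def consistent(premises):
    """Return True if some truth assignment satisfies every premise."""
    return any(all(p(passes, agi) for p in premises)
               for passes, agi in product([False, True], repeat=2))

premises = [
    lambda passes, agi: (not passes) or agi,  # (2): passing the Turing test implies AGI
    lambda passes, agi: passes,               # (3): an algorithm did pass in 2014
    lambda passes, agi: not agi,              # implicit premise: AGI does not yet exist
]

print(consistent(premises))  # False: the three premises together are unsatisfiable
```

Dropping any one of the three premises restores consistency, which is exactly why position (b), rejecting premise (3) under some stricter definition of the test, is the natural way out.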