That’s partly because I’ve never seen a consistent top-to-bottom reasoning for it.
I think it’s difficult to find a consistent top-to-bottom story because the overall argument is disjunctive.
That is, a conjunction is the intersection of different events (“the sidewalk is wet and it’s raining” requires it to be true both that “the sidewalk is wet” and that “it’s raining”), whereas a disjunction is the union of different (potentially overlapping) events (“the sidewalk is wet” can be reached by either “the sidewalk is wet and it’s raining” or “the sidewalk is wet and the fire hydrant is leaking”).
So if you have a conclusion, like “autonomous vehicles will be commercially available in 2030”, the more different ways there are for it to be true, the more likely it is. But also, the more different ways there are for it to be true, the less it makes sense to commit to any particular way. “Autonomous cars are commercially available in 2030 because Uber developed them” has more details, but those details are burdensome.
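To put the conjunction/disjunction point in symbols (a minimal sketch with made-up labels; W and U are illustrative events I’m introducing here, not anything from the original argument):

```latex
% A sketch of why extra details are "burdensome" and why disjunctive
% conclusions gain probability from having many routes to them.
% Let W = "autonomous vehicles are commercially available in 2030"
% and U = "Uber developed them" (an illustrative extra detail).

% W splits into disjoint routes, so each route only adds probability mass:
P(W) = P(W \wedge U) + P(W \wedge \neg U) \;\ge\; P(W \wedge U)

% The more specific story "W and U" is never more probable than W alone,
% even though the extra detail can make it sound more plausible.
```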
Also, it seems important to point out that the Bostromian position is about the future. That is, the state of autonomous vehicles today can tell you something about whether or not they’ll be commercially available in 2030, but there’s no hard evidence, and it takes careful reasoning just to reach provisional conclusions.
And thus, just like the state of neural networks in 2010 was only weakly informative about what would be possible in 2020, it seems reasonable to expect that the state of things in 2020 will be only weakly informative about what will be possible in 2030. Which is a very different question from how you should try to solve practical problems now.
I will probably be stealing the framing of the view as disjunctive as a way to look at why it’s hard to pin down.
And thus, just like the state of neural networks in 2010 was only weakly informative about what would be possible in 2020, it seems reasonable to expect that the state of things in 2020 will be only weakly informative about what will be possible in 2030.
This statement I would partially disagree with.
I think the idea of training on a GPU was coming to the forefront by 2010, and also the idea of CNNs for image recognition: https://hal.inria.fr/inria-00112631/document (see both in that 2006 paper).
I’d argue it’s fairly easy to look at today’s landscape and claim that by 2030 the things that are likely to happen include:
ML playing any possible game better than humans, assuming a team actually works on that specific game (maybe even if one doesn’t), with human-like inputs and human-like limitations in terms of granularity of taking inputs and giving outputs.
ML achieving all the things we can do with 2d images right now for 3d images and short (e.g. < 5 minute) videos.
Algorithms being able to write e.g. articles summarizing knowledge they gather from given sources, and possibly even find relevant sources via searching based on keywords (so you could just say “Write an article about Peru’s economic climate in 2028” rather than feeding in a bunch of articles about Peru’s economy in 2028)… the second part is already doable, but I’m mentioning them together since I assume people will be more impressed with the final product.
Algorithms being able to translate from and to almost any language about as well as a human, but still not well enough to translate sources which require a lot of interpretation (e.g. yes for translating a biology paper from English to Hindi or vice versa, no for translating a phenomenology paper from English to Hindi or vice versa).
Controlling mechanical systems (e.g. robotic arms) via networks trained using RL.
Generally speaking, algorithms being used in areas where they already out-perform humans but where regulations and systemic inefficiencies, combined with the stakes involved, don’t currently allow them to be used (e.g. accounting, risk analysis, setting insurance policies, diagnosis, treatment planning). Algorithms also being used to help in various scientific fields by replacing the need for humans to use classical statistics and/or manually fit equations in order to model certain processes.
I’d wager points 1 to 4 are basically a given; point 5 is debatable since it depends for the most part on human regulators and cultural acceptance.
I’d also wager that, other than audio processing, there won’t be much innovation beyond those 5 points that will create loads of hype by 2030. You might have ensembles of those 5 things building up to something bigger, but those 5 things will be at the core of it.
But that’s just my intuition, partially based on the kind of heuristics above about what is easily doable and what isn’t. But alas, the point of the article was to talk about what’s doable in the present, rather than what to expect from the future, so it’s not really that related.
I disagree with this point in particular. I’m assuming you’re basing this prediction on the recent successes of AlphaStar and OpenAI5, but there are obvious cracks upon closer inspection.
The “any possible game” part, though, is the final nail in the coffin for me, since you can conceive of plenty of games that are equivalent or similar to the Turing test, which is to say AGI-complete.
(Although I guess AGI-completeness is a much smaller deal to you)
Turing test, which is to say AGI-complete
You are aware chatbots have been “beating” the original Turing test since 2014, right? (And arguably even before) Also, AGI-complete == fools 1⁄3 of human judges in an x minute conversation via text? Ahm, no, just no.
That statement is meaningless unless you define the Turing test, and it stays meaningless even if you do define it, since there is literally no definition for “AGI-complete”. AGI is more of a generic term used to mean “kinda like a human”, but it’s not very concrete.
On the whole, yes, some games might prove too difficult for RL to beat… but I can’t think of any in particular. I think the statement holds for basically any popular competitive game (e.g. one where there are currently cash prizes above $1000 to be won). I’m sure one could design an adversarial game specifically made to not be beatable by RL but doable by a human… but that’s another story.
You are aware chatbots have been “beating” the original Turing test since 2014, right?
Yes, I was in fact. Seeing where this internet argument is going, I think it’s best to leave it here.
So, in that case.
If your original chain of logic is:
1. An RL-based algorithm that could play any game could pass the Turing test
2. An algorithm that can pass the Turing test is “AGI-complete”, thus it is unlikely that (1) will happen soon
And you agree with the statement:
3. An algorithm did pass the Turing test in 2014
Then you either:
a) Have a contradiction, or
b) Must have some specific definition of the Turing test under which 3 is untrue (and under which, more generally, no known algorithm can pass the Turing test)
I assume your position here is b and I’d love to hear it.
I’d also love to hear the causal reasoning behind 2 (maybe explained by your definition of the Turing test?).
If your definitions differ from commonly accepted ones and you rely on causal links which are not widely accepted, you must at least provide your versions of the definitions and some motivation behind the causality.