If, on the one hand, you had seen that since the 1950s computer AIs had been capable of beating humans at increasingly difficult games, that progress in this domain had been fairly steady and mostly limited by compute power, and moreover that computer Go programs had themselves gone from idiotic to high-amateur level over the course of decades, then the development of AlphaGo (if not the exact timing of that development) probably seemed inevitable.
This seems to entirely ignore most (if not all) of the salient implications of AlphaGo’s development. What set AlphaGo apart from previous attempts at computer Go was the iterated distillation and amplification scheme employed during its training. This represents a genuine conceptual advance over previous approaches, and to characterize it as simply a continuation of the trend of increasing strength in Go-playing programs only works if you neglect to define said “trend” in any way more specific than “roughly monotonically increasing”. And if you do that, you’ve tossed out any and all information that would make this a useful and non-vacuous observation.
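To make the “distillation and amplification” point concrete, here is a minimal sketch of that kind of training loop. This is my own illustrative code under assumed names (policy, mcts_search, self_play_game), not DeepMind’s actual implementation: the search step plays more strongly than the raw network (amplification), and the network is then trained to reproduce the search’s behavior directly (distillation).

```python
# Illustrative sketch of an amplification/distillation training loop.
# Every name here is a hypothetical stand-in, not DeepMind's code.

def training_loop(policy, mcts_search, self_play_game,
                  iterations=100, games_per_iteration=1000):
    for _ in range(iterations):
        examples = []
        for _ in range(games_per_iteration):
            # Amplification: search guided by the current network plays a
            # full game, choosing stronger moves than the raw network would.
            # Each position is labeled with the search's move distribution
            # and the eventual game outcome.
            examples.extend(self_play_game(policy, mcts_search))
        # Distillation: fit the network to reproduce the stronger,
        # search-backed targets, so the next round's search starts
        # from a better prior.
        policy.fit(examples)
    return policy
```

The conceptual advance lies in the closed loop itself: each round’s distilled network makes the next round’s amplified search stronger, and nothing about that loop is recoverable from a graph of Go-program ratings trending upward.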
Shortly after this paragraph, you write:
For the record, I was surprised at how soon AlphaGo happened, but not that it happened.
In other words, you got the easy and useless part (“will it happen?”) right, and the difficult and important part (“when will it happen?”) wrong. It’s not clear to me why you felt this warranted mentioning at all, but since you did mention it, I feel obligated to point out that “predictions” of this caliber are the best you’ll ever be able to do if you insist on throwing out any information more specific and granular than “historically, these metrics seem to move consistently upward/downward”.
In other words, you got the easy and useless part (“will it happen?”) right, and the difficult and important part (“when will it happen?”) wrong.
“Will it happen?” isn’t vacuous or easy, generally speaking. I can think of lots of questions where I have no idea what the answer is, despite a “trend of ever increasing strength”. For example:
Will chess be solved?
Will faster-than-light travel be achieved?
Will the P versus NP question be resolved?
Will the hard problem of consciousness be solved?
Will a Dyson sphere be constructed around Sol?
Will anthropogenic climate change cause Earth’s temperature to rise by 4°C?
Will Earth’s population surpass 100 billion people?
Will the African rhinoceros go extinct?
I feel obligated to point out that “predictions” of this caliber are the best you’ll ever be able to do if you insist on throwing out any information more specific and granular than “historically, these metrics seem to move consistently upward/downward”.
I’ve made specific statements about when I believe human-level AI will be developed. If you disagree with these predictions, please state your own.
“Will it happen?” isn’t vacuous or easy, generally speaking. I can think of lots of questions where I have no idea what the answer is, despite a “trend of ever increasing strength”.
In the post, you write:
If, on the one hand, you had seen that since the 1950s computer AIs had been capable of beating humans at increasingly difficult games, that progress in this domain had been fairly steady and mostly limited by compute power, and moreover that computer Go programs had themselves gone from idiotic to high-amateur level over the course of decades, then the development of AlphaGo (if not the exact timing of that development) probably seemed inevitable.
“Will it happen?” is easy precisely in cases where a development “seems inevitable”; the hard part then becomes forecasting when such a development will occur. The fact that you (and, indeed, most computer Go experts) did not do this is a testament to how unpredictable conceptual advances are, and your attempt to reduce it to the mere continuation of a trend is an oversimplification of the highest order.
I’ve made specific statements about when I believe human-level AI will be developed. If you disagree with these predictions, please state your own.
You’ve made statements about your willingness to bet at non-extreme odds over relatively large chunks of time. This indicates both low confidence and low granularity, which means that there’s very little disagreement to be had. (Of course, I don’t mean to imply that it’s possible to do better; indeed, given the current level of uncertainty surrounding everything to do with AI, about the only way to get me to disagree with you would have been to provide a highly confident, specific prediction.)
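To spell out the arithmetic behind “non-extreme odds” (my illustration, not a figure from the post): a bet laid at a:b odds implies a probability of a/(a+b), so anything between even odds and 3:1 only pins your probability somewhere in the 50–75% band, and spreading that over a multi-decade window leaves almost nothing to bet against.

```python
# Implied probability from betting odds (standard conversion; shown only
# to illustrate how coarse "non-extreme odds" are as predictions).

def implied_probability(stake_for: float, stake_against: float) -> float:
    """A bet laid at stake_for:stake_against odds implies this probability."""
    return stake_for / (stake_for + stake_against)

print(implied_probability(1, 1))  # 0.5  -- even odds
print(implied_probability(3, 1))  # 0.75 -- the whole 1:1-to-3:1 range spans 50-75%
```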
Nevertheless, it’s an indicator that you do not believe you possess particularly reliable information about future advances in AI, so I remain puzzled that you would present your thesis so strongly at the start. In particular, your claim that the following questions
Does this mean that the development of human-level AI might not surprise us? Or that by the time human-level AI is developed it will already be old news?
depend on
whether or not you were surprised by the development of AlphaGo
seems to have literally no connection to what you later claim, which is that AlphaGo did not surprise you because you knew something like it had to happen at some point. What is the relevant analogy here to artificial general intelligence? Will artificial general intelligence be “old news” because we suspected from the start that it was possible? If so, what does it mean for something to be “old news” if you have no idea when it will happen, and could not have predicted it would happen at any particular point until after it showed up?
As far as I can tell, reading through both the initial post and the comments, none of these questions have been answered.