Weather Control:
In 1966, a radio documentary, 2000 AD, aired as a forum for various media and science personalities to discuss what life might be like in the year 2000. The primary theme running through the show was a prediction that no one in the year 2000 would have to work more than a day or two a week, and our leisure time would go through the roof. With so much free time, you can imagine that we would not want our vacations or day trips ruined by nasty weather, and therefore we should quickly develop a way to control the weather, shaping it to our needs. Taking the lightning from the clouds or the wind from the tornadoes was among the predictions, yet they were careful to note that we might not take weather control too far, for political reasons. Unfortunately, we here in the 2000s still work full weeks, and we still get our picnics rained out from time to time.
A “scientific” prediction with a time-frame of several decades and no clear milestones along the way is equivalent to a wild guess. The “Weather Control” passage quoted above is from 20 Predictions of the Future (We’re Still Waiting For).
One could argue that weather control is an easier problem than AGI (e.g. powerful enough equipment could “unwind” storms and/or redirect weather masses).
Perhaps this is a poor place to begin, but I’ll propose a few things I think would count as milestones toward a theory of AGI.
- AIs producing original, valuable work (automated proof systems are one example; I believe there is algorithmically generated music as well that isn’t awful, though I’m not sure)
- parsing natural language queries (Watson is a huge accomplishment in this direction)
- systems which call different subroutines as appropriate (present in any OS, I’m sure) and which are modular in their subroutines
- automated search for a new appropriate subroutine (something like: if I get a new iPhone and say “start a game of Words With Friends with Danny,” the phone automatically downloads the Words With Friends app and searches for Danny’s profile; I don’t think this exists at present, but it seems realistic soon)
- emulation of living beings (i.e., a way of parsing a computation so that it behaves exactly like, for starters, C. elegans, and then more complex beings)
- AI that can learn “practical” skills (e.g., AIXI learning chess by playing against itself)
- robotics that can accomplish practical tasks like all-terrain maneuvering and fine manipulation (existent)
- AI that can learn novel skills (e.g., AIXI learning chess by being placed in a “chess environment” rather than having the rules explained to it)
- good emulation or API reverse engineering (like WINE), and especially theoretical results about reverse engineering
- automated bug-fixing programs (I don’t program enough to know how good debugging tools are)
- chatbots winning at Turing tests (IIRC there are competitions, and humans do not always shut out the chatbots)
These all seem like practical steps that would make me think AGI was nearer; many of them have come to pass in the past decade, very few came before that, some seem close, and some seem far away but achievable. There are certainly many more, though I would guess most would be technical, and I’m not sufficiently expert to offer opinions on them.
I’d also note that the weather-machine claim relied on the social-structural claim that people would only work a day or two a week; social structures notoriously change much more slowly, and no such assumption is necessary for AI to be studied.
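Two of the milestones above (modular subroutines dispatched as appropriate, and automated search for a new appropriate subroutine) can be caricatured in a few lines of code. This is only an illustrative sketch; every name in it is made up:

```python
# A minimal sketch of "dispatch to the appropriate subroutine":
# a registry maps recognized request verbs to handler functions,
# and unknown requests fall through to a (stubbed) search step.

def play_music(query):
    return f"playing: {query}"

def set_timer(query):
    return f"timer set for {query}"

HANDLERS = {
    "play": play_music,
    "timer": set_timer,
}

def dispatch(request):
    verb, _, rest = request.partition(" ")
    handler = HANDLERS.get(verb)
    if handler is None:
        # The "automated search for a new subroutine" milestone would go
        # here: find, install, and register a handler instead of failing.
        return f"no handler for {verb!r}; would search for one"
    return handler(rest)

print(dispatch("play some jazz"))  # playing: some jazz
print(dispatch("navigate home"))   # no handler for 'navigate'; would search for one
```

A real system would replace the dictionary lookup with intent recognition and the stub with an actual search for new components, but the shape (a registry of modular handlers plus a fallback acquisition step) is the point.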
Here is some computer-generated music. I don’t have particularly refined taste, but I enjoy it. Note: the first link with all the short MP3s is from an earlier version of the program, which was intended only to imitate other composers.
My guess is that it will be not so much milestones as seemingly unrelated puzzle pieces suddenly and unexpectedly coming together. This is usually how things happen. From VCRs to USB storage to app markets, you name it. Some invisible threshold gets crossed, and different technologies come together to create a killer product. Chances are, some equivalent of an AGI will sneak up on us in a form no one will have foreseen.
I agree that the eventual creation of AGI is likely to come from seemingly unrelated puzzle pieces coming together. On the other hand, anything that qualifies as an AGI is necessarily going to have the capabilities of a chatbot, a natural language parser, and so on. So the existence of these capabilities makes the puzzle of fitting the pieces together easier.
My point is simply that if you predict AGI in the next [time frame], you would expect to see [components of AGI] in [time frame], so I listed some things I would expect if I expected AGI soon, and I still expect all of those things soon. This makes it (in my mind) significantly different from just a “wild guess.”
If the “seed AI” idea is right, this claim can’t be taken for granted, especially if there’s no optimization for Friendliness.
I would make the case that anything that qualifies as an AGI would need to have some ability to interact with other agents, which would require an analogue of natural language processing, but I certainly agree that it isn’t strictly necessary for an AI to come about. I do still think of it as (weak) positive evidence though.
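The “weak positive evidence” framing can be made quantitative with a toy Bayesian update. Every probability below is invented purely for illustration: if decent natural language processing is very likely given near-term AGI but fairly likely regardless, observing it moves the posterior only modestly.

```python
# Toy Bayes update: how much does observing decent NLP raise P(AGI soon)?
# All numbers are made up for illustration.

p_agi_soon = 0.10       # prior on AGI within the time frame
p_nlp_if_soon = 0.95    # near-term AGI would almost require decent NLP now
p_nlp_if_not = 0.60     # but decent NLP is fairly likely anyway

p_nlp = p_nlp_if_soon * p_agi_soon + p_nlp_if_not * (1 - p_agi_soon)
posterior = p_nlp_if_soon * p_agi_soon / p_nlp

print(round(posterior, 2))  # 0.15: a modest update, i.e. weak evidence
```

The prior moves from 0.10 to about 0.15, which is exactly the sense in which the observation is weak but nonzero evidence.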
Two things. First, a seed AI could present an existential risk without ever requiring natural language processing, for example by engineering nanotech.
Second, the absence of good natural language processing isn’t great evidence that AI is far off, since even if it’s a required component of the full AGI, the seed AI might start without it and then add that functionality after a few iterations of other self-improvements.
I don’t think that we disagree here very much but we are talking past each other a little bit.
I definitely agree with your first point; I simply wouldn’t call such an AI fully general. It could easily destroy the world though.
I also agree with your second point, but in terms of a practical plan for people working on AI, natural language processing would be a place to start; having that technology means such a project is likely closer, and it demonstrates that the technical capabilities aren’t extremely far off. I don’t think any state of natural language processing would count as strong evidence, but I do think it counts as weak evidence and something of a small milestone.
I agree.
Yay!
Such equipment, however, would have to have access to more power than our civilization currently generates. So while it may be more of an engineering problem than a theoretical one, I believe that AGI is more accessible.
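That power claim holds up at the order-of-magnitude level. Using commonly cited rough figures (a mature hurricane releases latent heat on the order of 5.2 × 10^19 J/day, a number usually attributed to NOAA’s hurricane FAQ, and total world primary energy use runs around 18 TW; treat both as loose assumptions):

```python
# Back-of-envelope: hurricane heat output vs. civilization's power use.
# Both figures are rough, order-of-magnitude assumptions.

hurricane_watts = 5.2e19 / 86400   # ~5.2e19 J/day of latent heat release
world_watts = 1.8e13               # ~18 TW total primary energy use

print(hurricane_watts / world_watts)  # roughly 33x
```

Even counting only heat release, a single storm runs tens of times our entire energy budget, which is the sense in which “unwinding” storms looks power-limited rather than merely engineering-limited.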