Eliezer specifically mentioned Google in his Intelligence Explosion Microeconomics paper as the only named organization that could potentially start an intelligence explosion.
Larry Page has publicly said that he is specifically interested in “real AI” (Artificial General Intelligence), and some of the researchers in the field are funded by Google. So far as I know, this is still at the level of blue-sky work on basic algorithms and not an attempt to birth The Google in the next five years, but it still seems worth mentioning Google specifically.
In interviews Larry Page gave years ago, he repeatedly said that he wanted Google to become “the ultimate search engine”, one that would be able to understand all the information in the world. And to do that, Page said, it would need to be ‘true’ artificial intelligence (he didn’t say ‘true’, but what he means becomes clear from the context). Here’s a quote by Larry Page from 2007:
We have some people at Google who are really trying to build artificial intelligence and to do it on a large scale and so on, and in fact, to make search better, to do the perfect job of search you could ask any query and it would give you the perfect answer and that would be artificial intelligence based on everything being on the web, which is a pretty close approximation. We’re lucky enough to be working incrementally closer to that, but again, very, very few people are working on this, and I don’t think it’s as far off as people think.
I doubt it would be very Friendly if you use MIRI’s definition, but it doesn’t seem like they have anything ‘evil’ in mind. Peter Norvig is the co-author of AI: A Modern Approach, which is currently the dominant textbook in the field. The 3rd edition had several mentions of AGI and Friendly AI. So at least some people at Google have heard about this Friendliness thing and paid attention to it. But the projects run by Google X are quite secretive, so it’s hard to know exactly how seriously they take the dangers of AGI and how much effort they put into these matters. It could be, like lukeprog said in October 2012, that Google doesn’t even have “an AGI team”.
Not that I know of, anyway. Kurzweil’s team is probably part of Page’s long-term AGI ambitions, but right now they’re focusing on NLP (last I heard). And DeepMind, which also has long-term AGI ambitions, has been working on game AI as an intermediate step. But then again, that kind of work is probably more relevant progress toward AGI than, say, OpenCog.
IIRC the DeepMind folks were considering setting up an ethics board before Google acquired them, so the Google ethics board may be a carryover from that. FHI spoke to DeepMind about safety standards a while back, so they’re not totally closed to taking Friendliness seriously. I haven’t spoken to the ethics board, so I don’t know how serious they are.
Update: “DeepMind reportedly insisted on the board’s establishment before reaching a deal.”
Update: DeepMind will work under Jeff Dean on Google’s search team.
And, predictably:
“Things like the ethics board smack of the kind of self-aggrandizement that we are so worried about,” one machine learning researcher told Re/code. “We’re a hell of a long way from needing to worry about the ethics of AI.”
...despite the fact that AI systems already fly planes, drive trains, and pilot Hellfire-carrying aerial drones.
The NYTimes also links to LessWrong in its article, namely to my post here.
It would be quite a reach to insist that we need to worry about the ethics of the control boards that calculate how to move elevons or how far to open a throttle in order to maintain a certain course or speed. Autonomous UAVs able to open fire without a human in the loop are much more worrying.
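For concreteness, here is a minimal sketch (in Python, with invented gains and a one-line vehicle model, not code from any real autopilot) of the kind of feedback loop such a control board runs:

```python
# Toy throttle controller of the kind described above: a proportional
# feedback loop that holds a target speed. All names, gains, and the
# one-line "plant" model are illustrative, not from any real system.

def make_speed_controller(setpoint: float, kp: float = 0.5):
    """Return a function mapping measured speed -> throttle in [0, 1]."""
    def step(measured_speed: float) -> float:
        error = setpoint - measured_speed
        return max(0.0, min(1.0, kp * error))  # clamp throttle to [0, 1]
    return step

controller = make_speed_controller(setpoint=80.0)
speed = 60.0
for _ in range(50):
    throttle = controller(speed)
    speed += throttle * 2.0 - 0.5  # crude physics: thrust minus drag
print(speed)  # ~79.5: a pure P controller leaves a small steady-state offset
```

The loop has a fixed setpoint, no model of the world beyond a single number, and no latitude to pursue anything else, which is the point of the contrast.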
I imagine that some of the issues the ethics board might eventually have to deal with would be related to self-agentizing tools, in Karnofsky-style terminology. For example, if a future search engine receives queries whose answers depend on other simultaneous queries, it may have to solve game-theoretic problems, like optimizing traffic flows. These may someday include life-critical decisions, like whether to direct drivers onto a more congested route in order to let emergency vehicles pass unimpeded.
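To make the flavor of such a decision concrete, here is a hypothetical toy sketch (Python; the congestion model, road names, and all numbers are invented for illustration) of a router that deliberately worsens ordinary drivers’ routes to keep a road clear for an emergency vehicle:

```python
# Hypothetical sketch of the routing trade-off described above: a central
# router choosing between two roads, optionally reserving one for an
# emergency vehicle. Travel times use a simple linear congestion model.

def travel_time(base: float, cars: int) -> float:
    return base * (1.0 + 0.1 * cars)  # each car adds 10% to the base time

def assign_routes(n_drivers: int, emergency: bool):
    """Greedily send each driver to the currently faster road.

    If an emergency vehicle needs road A, regular traffic is forced onto
    road B even though that worsens ordinary drivers' travel times.
    """
    load = {"A": 0, "B": 0}
    base = {"A": 10.0, "B": 15.0}
    for _ in range(n_drivers):
        if emergency:
            choice = "B"  # road A is reserved for the emergency vehicle
        else:
            choice = min(load, key=lambda r: travel_time(base[r], load[r]))
        load[choice] += 1
    return load, {r: travel_time(base[r], load[r]) for r in load}

print(assign_routes(20, emergency=False))  # drivers spread across both roads
print(assign_routes(20, emergency=True))   # everyone on B: slower for them
```

Even in this toy version the router is making a welfare trade-off between groups of road users, which is exactly the sort of question an ethics board might plausibly weigh in on.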
I personally suspect the ethics board exists for more prosaic reasons. Think “don’t bias the results of people’s medical advice searches to favor the products of pharmaceutical companies that pay you money” rather than “don’t eat the world”.
EDIT: just saw other posts including quotes from the head people of the place that got bought. I still think that these are the sorts of actual issues they will deal with, as opposed to the theoretical justifications.
So, to summarize, Google wants to build a potentially dangerous AI, but they believe they can keep it as an Oracle AI which will answer questions but not act independently. They also apparently believe (not without some grounding) that true AI is so computationally expensive in terms of both speed and training data that we will probably maintain an advantage of sheer physical violence over a potentially threatening unboxed oracle for a long time.
Except that they are also blatant ideological Singularitarians, so they’re working to close that gap.