Update: DeepMind will work under Jeff Dean at Google’s search team.
And, predictably:
“Things like the ethics board smack of the kind of self-aggrandizement that we are so worried about,” one machine learning researcher told Re/code. “We’re a hell of a long way from needing to worry about the ethics of AI.”
...despite the fact that AI systems already fly planes, drive trains, and pilot Hellfire-carrying aerial drones.
It would be quite a reach to insist that we need to worry about the ethics of the control boards that calculate how to move elevons, or how much to open a throttle, in order to maintain a certain course or speed. Autonomous UAVs able to open fire without a human in the loop are much more worrying.
I imagine that some of the issues the ethics board might eventually have to deal with would be related to self-agentizing tools, in Karnofsky-style terminology. For example, if a future search engine receives queries whose answers depend on other simultaneous queries, it may have to solve game-theoretical problems, like optimizing traffic flows. These may someday include life-critical decisions, like whether to direct drivers to a more congested route in order to let emergency vehicles pass unimpeded.
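To make the traffic-flow point concrete, here is a minimal sketch of Pigou's classic congestion game (purely illustrative, not anything DeepMind or Google has built): when each driver routes selfishly, total latency is worse than what a coordinating router could achieve.

```python
# Pigou's example: two routes from A to B. One is a fixed road with
# latency 1.0 regardless of load; the other is a congestible road
# whose latency equals the fraction p of traffic using it.

def avg_latency(p):
    """Average latency when fraction p takes the congestible road."""
    return p * p + (1.0 - p) * 1.0

# Selfish (Nash) routing: every driver takes the congestible road,
# since its latency p never exceeds the fixed road's latency of 1.
nash = avg_latency(1.0)  # 1.0

# Coordinated routing: find the socially optimal split by grid search.
best_p = min((i / 1000 for i in range(1001)), key=avg_latency)
optimum = avg_latency(best_p)  # 0.75, at p = 0.5

print(f"selfish routing: {nash:.2f}, coordinated: {optimum:.2f}")
```

The gap (1.0 vs 0.75) is the "price of anarchy": a router that answers each query in isolation sends everyone down the congestible road, while one that reasons about all simultaneous queries splits the traffic.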
I personally suspect the ethics board exists for more prosaic reasons. Think “don’t bias the results of people’s medical advice searches to favor the products of pharmaceutical companies that pay you money” rather than “don’t eat the world”.
EDIT: just saw other posts including quotes from the head people of the place that got bought. I still think these are the sorts of actual issues they will deal with, as opposed to the theoretical justifications.
Update: “DeepMind reportedly insisted on the board’s establishment before reaching a deal.”
Update: the NYTimes also links to LessWrong.
They actually link to LessWrong in the article, namely to my post here.