DeepMind isn’t doing safety engineering; they’re doing standard AI research. It doesn’t matter whether Elon Musk is interested in AI safety if, after his deliberations, he invests in efforts to develop unsafe AI. Good intentions don’t leak value into the consequences of your acts.
He said the investments were to “keep an eye on what’s going on with artificial intelligence.” I’m not sure how investments help with that, but perhaps DeepMind and Vicarious are willing to give certain information to people who invest in them that they wouldn’t give otherwise?
Right, but it’s still good news, because it shifts the conversation from whether AI is dangerous to which organization is best positioned to prevent unsafe AI. Right now, a report by MIRI comparing MIRI against DeepMind/Vicarious on the specifics, if it magically came across Musk’s desk, would have a chance of doing good. Before, it wouldn’t. That’s progress.