So, to summarize: Google wants to build a potentially dangerous AI, but they believe they can keep it as an Oracle AI, one that will answer questions but not act independently. They also apparently believe (not without some grounding) that true AI is so computationally expensive, in terms of both speed and training data, that we will probably maintain an advantage of sheer physical force over a potentially threatening unboxed oracle for a long time.
Except that they are also blatant ideological Singularitarians, so they're working to close that gap.