It sets AGI-minded programmers (under circumstances expected to yield UFAI) onto tasks that would not be expected to result in AGI of any sort (driving)
I get that part. Is there some reason I’m missing as to why Google wouldn’t utilize the talent at DeepMind to pursue AGI-relevant projects?
I mean, Google has great resources (much more than MIRI or anyone else) and a proven record of success at being instrumentally rational in the technical/programming arena (i.e. winning on a grand scale for a length of time). They are adding folks who, from what I read on LW, actually understand AGI’s complexity, implications, etc.
Can you elaborate?
Just nervousness about UFAI.
This analysis seems to be based on AGI-mindedness being an inherent property of programmers, and not a response to market forces.
No… not at all! Quite the opposite, in fact. If AGI-mindedness were inherent, then moving those programmers onto non-AGI tasks would be ineffective.