Unless the AI was given that outcome (cheerful, contented people, and so on) as a terminal goal, or that circle of people happened to be the best possible route to some other terminal goal, both of which are staggeringly unlikely, Dyson suspects wrongly.
If you think he suspects rightly, I would really like to see a justification. Keep in mind that AGIs are not currently being built using evolutionary methods in multi-agent environments, so no ‘social cooperation’ mechanism of that kind will arise on its own.
Machine intelligence programmers seem likely to construct their machines so as to help satisfy the programmers' own preferences, which in turn is likely to leave those programmers satisfied. I am not sure what you are talking about; surely this kind of thing is already happening all the time, with Sergey Brin, James Harris Simons, and so on.