Intelligence amplification technology is widespread, preventing any differential adoption by the FAI team. However, FAI researchers are able to keep up with competing efforts to use that technology for AI research.
If the “FAI is important” position is correct, but requires intelligence to understand, would widespread IA cause more people to become interested in working on FAI?
Yeah, I’ve heard that argument before. The idea is that intelligence not only makes you better at stuff, but also impacts how you make decisions about what to work on.
The alternate hypothesis is that intelligence-amplified people would just get better at being crazy. Perhaps one could start to tease apart the hypotheses by distinguishing ‘intelligence’ from ‘reflectiveness’ and ‘altruism’, and trying to establish how those quantities interact.
Related point: High-IQ folks are more likely to cooperate in the prisoner’s dilemma (see Section 3 of this article), which suggests that they’d be more inclined to do altruistic stuff like make an AI that’s friendly to everyone rather than an AI that serves their individual wishes.