Great list!!
On the neuroscience side, I’ve been trying to dive into “if we build AGI using similar algorithms as the human brain, how can we make it safe and beneficial?” Further reading. That’s more studying algorithms than “studying humans”, probably.
I guess these are all more on the strategy side, but...
Out of the possible futures in which we’ve invented cheap superintelligent AGIs, do a survey of which one most people on Earth would actually want to live in. How do the answers interact with different personalities and different value systems? Further reading.
If everyone on Earth had a superintelligent AGI helper, what would they do with it, and what would be the societal consequences? What if each person could buy an AGI whose capabilities are proportional to the amount of money they spend on its hardware?
How can we avoid the failure mode (assuming it is in fact a failure mode) where we solve the technical problem of making AGIs that are docile and subservient, but then there’s a political movement of people identifying with those AGIs and lobbying to make them more selfish and independent, presumably citing the analogy of slavery? What sort of AGI, and AGI-human interaction framework, would make this more or less likely to happen? Further reading.