Curated! Delighted to get a thoughtful new paper from Nick Bostrom, and I’m very intrigued by the relatively pragmatic proposal here. There are many details that seem worthwhile, such as avoiding an AGI project primarily dictated by market forces, and also avoiding one primarily dictated by national military forces, through having a company that intentionally distributes its voting shares in a rough heuristic approximation of a global democracy.
This seems to me a plausible candidate for a worthwhile new direction in which to apply optimization pressure on the world, though I cannot quite tell how such an improvement (if indeed it is a major improvement) trades off against work toward simply halting global AGI development. I am curating in the strong hope that more people will engage with the details, propose amendments, or offer arguments for why this direction is not workable or desirable.
For instance, I quite appreciated Wei Dai’s comment; I too find it very disheartening to see so many people get rich while selling out humanity’s existential fate. I am unclear exactly what makes sense here, but I think it plausible that, in choosing to join the leadership of such an AGI project, one should permanently give up the possibility of ever becoming wealthy on the scale of more than a few million dollars, in order to remove that personal incentive. The main counterargument, I think, is that if one is not independently quite powerful (i.e. wealthy), one may be targeted by forces with far more power (i.e. funds) and end up controlled by them. I’m not sure how these considerations balance out.
Overall, very exciting; I look forward to thinking about this proposal more.