Really enjoyed your essay series. Appreciated that it offered a positive future vision and then a roadmap for how to get there. Both are important. Too many people seem to be sleepwalking into a sketchy AGI future.
Here’s my vision from a 2022 Future of Life Institute contest: “A future where sentient beings thrive due to widespread agreement on core values; improvements in education; Personal Agent AIs; social simulations; and updated legal systems (based on the core values) that are fair, nimble, and capable of controlling dangerous humans and AGIs. Of the core values, Truth and Civility are particularly impactful in keeping the world moving in a positive direction.” Full scenario here.
Compare with yours:
We want to live in a world where:
Humans can create economic value for themselves and can disrupt existing elites well after AGI.
Everyone has an unprecedentedly high standard of living, both to meet their needs and to keep money flowing in the human economy.
No single actor or oligarchy—whether that be governments, companies, or a handful of individuals—monopolizes AGI. By extension, no single actor monopolizes power.
Regular people are in control of their destiny. We hold as a self-evident truth that humans should be the masters of their own futures.
Close enough.
Reflections and findings about the FLI contest are here.
Thoughts on Averting the Intelligence Curse via AI Safety via Law here.
Thoughts on Diffusing and Democratizing AI through next-generation virtual assistants (Personal Agents) here.
Anthony Aguirre’s argument for pursuing narrow(er) AI over AGI here.
Hopefully something of interest.
I have already proposed the following radical solution to all problems related to the Intelligence Curse: align the AGI to a treaty under which, instead of obeying all orders except those forbidden by the Spec, it harvests at most a certain share of resources and helps humans only in certain ways[1] that amplify humanity rather than degrade it, such as teaching humans facts that mankind has already discovered, pointing out mistakes in humans' work, or protecting mankind from other existential risks that are hard to deal with, such as an accidental nuclear war.
It also seems to me that this type of alignment might actually be easier to generalize to AGI than the types that cause the Curse. Or, even more radically, the types of alignment that cause the Curse might be impossible to achieve at all, yet possible to fake, as Agent-5 does in the race ending of the AI-2027 forecast.
UPD: a prompt by Ashutosh Shrivastava with a similar premise is mentioned in AI overview #114.
I think claiming the above is a "radical solution to all problems related to the Intelligence Curse" is an overstatement. The three treaty elements you mention could be useful as part of AI-human social contracts, thus getting at part of the Averting (i.e., AI Safety) piece. But many more treaty elements (Laws, Rules) are also needed IMO.
The Diffusing and Democratizing (and maybe other) pieces are also needed for an effective solution.
(Also, unclear what you mean by “obeying all orders except the ones determined by the Spec.” What Spec?)
Now that I can answer, I will: if the ASI is ONLY willing to teach humans facts that other humans have already discovered, and not to do other work for them, then the ASI won't replace anyone else whose work requires education. The Intelligence Curse is thus prevented.