In order for humans to survive the AI transition I think we need to succeed on the technical problems of alignment (which are perhaps not as bad as Less Wrong culture made them out to be), and we also need to “land the plane” of superintelligent AI on a stable equilibrium where humans are still the primary beneficiaries of civilization, rather than a pest species to be exterminated or squatters to be evicted.
I agree completely with this.
I want to take the opportunity to elaborate a little on what a “stable equilibrium” civilisation should have, in my mind:
1. Digital trust infrastructure: decentralised identity, secure communication (see Layers 1 and 2 in the Trust Over IP Stack), proof-of-humanness, and proof-of-AI (e.g., a proof that a given artifact was created by a given agent, perhaps provided by OpenAI; watermarking has failed, so we need new, robust solutions based on zero-knowledge proofs).
2. Infrastructure for collective sensemaking and coordination: infrastructure for communicating beliefs and counterfactuals, making commitments, imposing constraints on agent behaviour, and monitoring compliance. This corresponds to Layer 3 in the Trust Over IP Stack. We at Gaia Consortium are working on this. Please join to help us!
3. Infrastructure and systems for collective epistemics: next-generation social networks (e.g., https://subconscious.network/), media, content authenticity, and Jim Rutt’s “info agents” (he advises “three different projects that are working on this”).
4. The science and ethics of consciousness and suffering mostly solved, and much more effort in biology to understand whom (or whose existence, joy, or non-suffering) civilisation should value, to better inform the constraints and policy for economic agents (monitored and verified through the infrastructure from item 2).
5. Systems for political decision-making and collective ethical deliberation: see the Collective Intelligence Project, Policy Synth, and simulated deliberative democracy. These types of systems should also be used for governing all of the above layers.
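To make the proof-of-AI idea in item 1 concrete, here is a minimal toy sketch of an artifact-provenance attestation. Everything in it is an assumption for illustration: the agent ID, the provider key, and the HMAC construction are placeholders. A real scheme would use public-key signatures or zero-knowledge proofs so that anyone can verify without the provider’s secret; here a keyed HMAC over the artifact hash merely stands in for the binding between an artifact and the agent that produced it.

```python
import hashlib
import hmac

# Placeholder secret held by the AI provider; a real system would use
# asymmetric keys or a ZK proof system instead of a shared secret.
PROVIDER_KEY = b"provider-secret-key"

def attest(artifact: bytes, agent_id: str) -> str:
    """Provider-side: bind an artifact to the agent that produced it
    by MACing (agent_id, artifact hash) with the provider's key."""
    digest = hashlib.sha256(artifact).hexdigest()
    message = f"{agent_id}:{digest}".encode()
    return hmac.new(PROVIDER_KEY, message, hashlib.sha256).hexdigest()

def verify(artifact: bytes, agent_id: str, attestation: str) -> bool:
    """Verifier-side: recompute the attestation and compare in constant
    time. (In this toy version only the key holder can verify; a ZK or
    signature scheme would make verification public.)"""
    return hmac.compare_digest(attest(artifact, agent_id), attestation)

tag = attest(b"generated text", "agent-42")
assert verify(b"generated text", "agent-42", tag)        # genuine artifact
assert not verify(b"tampered text", "agent-42", tag)     # modified artifact
assert not verify(b"generated text", "agent-43", tag)    # wrong agent
```

The point of the sketch is the interface, not the cryptography: whatever the proof system, the civilisation-level infrastructure needs exactly these two operations, attestation at creation time and verification by third parties.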
Did I forget something important? Comments and criticism are welcome.