My current guess for the least-worst path of ASI development that's not wildly unrealistic:
open source development + complete surveillance of all citizens and all elites (everyone's cameras broadcast to the public) + two-tier voting.
Two-tier voting:
Countries' govts vote or otherwise agree at the global level, on a daily basis, on what the rate of AI progress should be and which types of AI usage are allowed. (This rate can be zero.)
All democratic countries use daily internet voting (liquid democracy) to decide what stance to represent at the global level. All other countries can use whatever internal method they prefer to decide their stance at the global level.
(All ASI labs are assumed to be property of their respective national govts. An ASI lab misbehaving is its govt's responsibility.) Any country whose ASI labs refuse to accept the results of the global vote and accelerate faster risks war (including nuclear war or war using hypothetical future weapons). Any country whose ASI labs refuse to broadcast themselves on live video risks war. Any country's govt that refuses to let its citizens broadcast live video risks war. Any country whose citizens mostly refuse to broadcast themselves on live video risks war. The exact thresholds for how much violation leads to how much escalation of war may ultimately depend on how powerful the AI is. The more powerful the AI is (especially for offence rather than defence), the more quickly other countries must be willing to escalate to nuclear war in response to a violation.
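The liquid-democracy tallying step can be sketched as follows. This is a minimal illustration only; the resolution rules here (delegation chains are followed transitively, and cycles or dead-end chains count as abstentions) are my assumptions, not part of the proposal.

```python
# Minimal liquid-democracy tally: each voter either votes directly on a stance
# (e.g. a proposed rate of AI progress) or delegates their vote to another voter.
# Chains of delegation are followed; a cycle or a dangling delegation abstains.
# These resolution rules are illustrative assumptions.

def tally(direct_votes, delegations):
    """direct_votes: {voter: stance}, delegations: {voter: delegate}.
    Returns {stance: number of resolved votes}."""
    def resolve(voter, seen):
        if voter in direct_votes:
            return direct_votes[voter]
        if voter in seen or voter not in delegations:
            return None  # cycle or dead end -> abstain
        seen.add(voter)
        return resolve(delegations[voter], seen)

    counts = {}
    for voter in sorted(set(direct_votes) | set(delegations)):
        stance = resolve(voter, set())
        if stance is not None:
            counts[stance] = counts.get(stance, 0) + 1
    return counts

# Example: C delegates to B, who votes "pause"; D and E delegate to each other
# (a cycle), so both abstain.
print(tally({"A": "slow", "B": "pause"}, {"C": "B", "D": "E", "E": "D"}))
# -> {'slow': 1, 'pause': 2}
```

In a real daily vote the stances would presumably be aggregated further (e.g. taking a median rate rather than a plurality winner), but that choice is outside anything the proposal specifies.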
Open source development
All people working at ASI labs are broadcast live to the public 24x7x365. Any AI advances made must be immediately proliferated to every single person on Earth who can afford a computer. Some citizens will be able to spend more on inference than others, but everyone should have the AI weights on their personal computer.
This means bioweapons, nanotech weapons and any other weapons invented by the AI are also immediately proliferated to everyone on Earth. So this setup necessarily has to be paired with complete surveillance of everyone. Everyone will broadcast their camera feeds publicly. Anyone who refuses can be arrested or killed via legal or extra-legal means.
Since everyone knows all AI advances will be proliferated immediately, they will also use this knowledge to vote on what the global rate of progress should be.
There are plenty of ways this plan can fail and I haven’t thought through all of them. But this is my current guess.
complete surveillance of all citizens and all elites
Certainly at a human level this is unrealistic. In a way it’s also overkill—if use of an AI is an essential step towards doing anything dangerous, the “surveillance” can just be of what AIs are doing or thinking.
This assumes that you can tell whether an AI input or output is dangerous. But the same thing applies to video surveillance—if you can’t tell whether a person is brewing something harmless or harmful, having a video camera in their kitchen is no use.
At a posthuman level, mere video surveillance actually does not go far enough, again because a smart deceiver can carry out their dastardly plots in a way that isn’t evident until it’s too late. For a transhuman civilization that has values to preserve, I see no alternative to enforcing that every entity above a certain level of intelligence (basically, smart enough to be dangerous) is also internally aligned, so that there is no disposition to hatch dastardly plots in the first place.
This may sound totalitarian, but it’s not that different to what humanity attempts to instill in the course of raising children and via education and culture. We have law to deter and punish transgressors, but we also have these developmental feedbacks that are intended to create moral, responsible adults that don’t have such inclinations, or that at least restrain themselves.
In a civilization where it is theoretically possible to create a mind with any set of dispositions at all, from paperclip maximizer to rationalist bodhisattva, the “developmental feedbacks” need to extend more deeply into the processes that design and create possible minds, than they do in a merely human civilization.