I don’t understand your point about asymmetry. Doesn’t that tend to make the default course bad?
What I meant was, imagine two worlds:
Individual Control, where AI developers vary wildly in their approach to risk, safety, and deployment
Diffused Control, where AI developers tend to take similar approaches to risk, safety, and deployment
If risk-reducing actions reduce risk as much as risk-increasing actions increase it (i.e., payoffs are symmetric), then these two worlds have identical risk. But if payoffs are asymmetric (i.e., these companies are more able to increase risk than they are to decrease it), then the Diffused Control world has lower overall risk. A single reckless outlier can dominate the outcome, and reckless outliers are more likely in the Individual Control world.
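The variance argument can be made concrete with a toy simulation. This is a minimal sketch under assumed modeling choices (Gaussian stances, a convex `exp` payoff to encode asymmetry, and the most reckless developer setting overall risk); none of these specifics come from the discussion itself.

```python
import math
import random

random.seed(0)

def world_risk(spread, n_devs=10, trials=20000):
    """Average overall risk when each developer's recklessness is drawn
    with the given spread around a common mean of zero.

    Assumptions (illustrative, not canonical):
    - risk is convex (exp) in a developer's stance, so a move toward
      recklessness raises risk more than an equal move toward caution
      lowers it (the asymmetric payoff);
    - overall risk is set by the most reckless developer (the outlier
      dominates the outcome).
    """
    total = 0.0
    for _ in range(trials):
        stances = [random.gauss(0.0, spread) for _ in range(n_devs)]
        total += math.exp(max(stances))  # convex payoff of the worst actor
    return total / trials

individual = world_risk(spread=1.0)  # wildly varying approaches
diffused = world_risk(spread=0.2)    # similar approaches
print(individual > diffused)  # True: same mean stance, higher variance, more risk
```

With a symmetric (linear) payoff the two worlds would average out to the same risk; the convexity plus the `max` is what makes the high-variance world worse.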
Does that make the default course bad? I guess so. But if it is true, it implies that having AI developers controlled by individuals is worse than having them run by committee.