You could hold the view that open-weights AGI is too costly in terms of takeover risk and that escape is bad, but that we'll hopefully have some pre-AGI AIs which exhibit strange misaligned behaviors that don't actually gain them much (or any) influence or power. If this is your view, then it really seems to me that preventing escape and rogue internal deployment is pretty useful.