[Question] AI Alignment: Constraints and Control, or Incentives and Partnership?

A quick search on AI benevolence didn't return much, and I haven't waded into the depths of the field. Still, I'm wondering to what extent the current approach to alignment is about constraining and controlling AI (call that bucket 1) versus incentivizing it and building a partnership (bucket 2) as a solution to the general fears about AGI.

If one were to sort existing approaches into these two buckets, roughly what percentage would fall into each? My impression is that most of what I've seen leans toward bucket 1 solutions.
