I am also sympathetic to the higher-level point. If we have AIs sufficiently aligned that we can make them follow the OpenAI model spec, I’d be at least somewhat surprised if governments and AI companies didn’t find a way to craft model specs that avoided the problems described in this post.
I agree that it is unclear whether this would go the way I hypothesized here. But I think it’s arguably the straightforward extrapolation of how America handles this kind of problem: are you surprised that America allows parents to homeschool their children?
Note also that AI companies can compete on how user-validating their models are, and in the absence of some kind of regulation, people will be able to pay for models that have the properties they want.
No, but I’d be surprised if it were very widespread and resulted in the sort of fractures described in this post.