[Question] How useful could stolen AI model weights be without knowing the architecture and activation functions?

I’m thinking of an unreleased frontier model with no public information about it. How realistic is it that such a model could be duplicated from the weights alone, e.g. by brute-forcing different combinations of architectures and activation functions? Or would thieves likely end up with an inferior bizarro model?
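
For context on what the search space actually looks like, here is a minimal sketch (assuming the stolen weights ship as a safetensors checkpoint; the filename is hypothetical) of how much structure the checkpoint itself already exposes just from tensor names and shapes, before any brute-forcing starts:

```python
# Sketch: even without the model code, the tensor names and shapes in a
# checkpoint reveal a lot of the architecture (layer count, hidden size,
# MLP expansion, attention projection sizes). The activation function,
# by contrast, is code, not weights, so it is not stored in the file.
from safetensors import safe_open

# "stolen_weights.safetensors" is a hypothetical filename for illustration.
with safe_open("stolen_weights.safetensors", framework="pt") as f:
    for name in f.keys():
        # get_slice reads only metadata, so no tensor data is loaded
        print(name, f.get_slice(name).get_shape())

# Typical output lines might look like:
#   model.layers.0.self_attn.q_proj.weight [4096, 4096]
#   model.layers.0.mlp.gate_proj.weight    [11008, 4096]
# which narrows the guessing mostly to things like the activation,
# normalization details, and positional-encoding scheme.
```

(That's just my rough understanding of what a weights file contains; I'd still like to hear how far that gets an attacker in practice.)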