It’s an interesting concept that some AI labs are playing around with. I believe GLM-4.7 does this process inside its <think> tags: you’ll see it draft a response first, critique the draft, and then output an adjusted response. I frankly haven’t played around with GLM-4.7 enough to know whether it’s actually more effective in practice, but I do like the idea.
However, I personally find more value in getting a second opinion from a different model architecture entirely and then using both assessments to make an informed decision. I suppose it all comes down to the particular use case; there are upsides and downsides to both approaches.
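For what it’s worth, you can get the same draft → critique → revise effect explicitly across separate calls instead of inside one model’s <think> block. Here’s a minimal sketch; `call_model` is a hypothetical stand-in for whatever chat API you actually use (stubbed here so the control flow is visible):

```python
def call_model(prompt: str) -> str:
    # Hypothetical stub: a real version would hit an LLM endpoint.
    return f"[model output for: {prompt[:40]}]"

def draft_critique_revise(question: str) -> str:
    # Step 1: draft an answer.
    draft = call_model(f"Answer this question: {question}")
    # Step 2: critique the draft.
    critique = call_model(f"Critique this draft answer:\n{draft}")
    # Step 3: revise the draft in light of the critique.
    return call_model(
        "Rewrite the draft to address the critique.\n"
        f"Draft: {draft}\nCritique: {critique}"
    )
```

The nice part of doing it explicitly is that steps 2 and 3 don’t have to go to the same model, which is basically the second-opinion setup I described, at the cost of extra tokens and latency.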