Interesting, my experience is roughly the opposite re Claude-3.7 vs the GPTs (no comment on Gemini, I’ve used it much less so far). Claude is my main workhorse: good at writing, good at coding, good at helping think things through. Anecdote: I had an interesting mini-research case yesterday (‘What has Trump II done that liberals are likely to be happiest about?’) where Claude did well, albeit with some repetition, while both o3 and o4-mini flopped. o3 was initially very skeptical that there was a second Trump term at all.
Hard to say whether that’s down to different prompting, different preferences, or even chance variation, though.
Gemini seems to do a better job of shortening text while maintaining the nuance I expect grant reviewers to demand. Claude seems to focus entirely on shortening text. For context, I’m feeding a specific aims page for my PhD work that I’ve written about 15 drafts of already, so I have detailed implicit preferences about what is and is not an acceptable result.