I very much hope that there are also doctors, nurses, administrators, and other relevant roles on that team. If not (or really, regardless), any tool selection should involve a pilot process with side-by-side comparisons of results from several options, using known past, present, and expected future use cases. The outputs should also be evaluated independently by multiple people with different backgrounds and roles.
For some things it will. But for some things (tools coded as 'research support' or 'point of care reference tools', or, more generally, information resources) it's up to the library, just like we make the decisions about which journals we subscribe to. I gather that before I started, there used to be more in the way of meaningful consultation with people in other roles, but as our staffing has been axed, those sorts of outreach relationships have fallen by the wayside.
I’m going to assume the tools you’re considering are healthcare-specific and advertise themselves as being compliant with any relevant UK laws. If so, what do the providers claim about how they can and should be used, and how they shouldn’t? Do the pilot results bear that out? If not, then you really do need to understand how the tools work, what data goes where, and the like.
It would be great if that were a reasonable assumption. Every one I've evaluated so far has turned out to be some kind of ChatGPT with a medical-academic research bow on it. Some of them are restricted to a walled garden of trusted medical sources rather than the open internet.
Part of the message I think I ought to promote is that we should hold out for something specific. The issue is that when it comes to research, it really is up to people what they use; there's no real oversight, and there are no regulations to stop them the way there would be if they were provably putting patient info in there. But they're still going to be bringing what they "learn" into practice, as well as polluting the commons (since we know at this point that peer review doesn't do much, and it's mostly people's academic integrity keeping it all from falling apart).
Part of what these companies with their GPTs are trying to sell themselves as being able to replace is the exact set of checks and balances that keeps the whole medical research commons from being nothing but bullshit: critical appraisal and evidence synthesis.
I suspect, though I don’t know, that the ceiling of what results a skilled user can achieve using a frontier LLM is probably higher than what most dedicated healthcare-focused tools can do, but the floor is very likely to be much, much worse.
That's about what I thought, yeah.
Thank you for the phrases; they seem useful.