It’s unclear to me why the numbers you report are only absolute numbers. I would expect that there are a bunch of factors that can make the wastewater more or less concentrated, and thus affect the reads, that you would want to filter out.
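To make the point concrete, here is a minimal sketch of what filtering out concentration effects could look like. This assumes raw sequencing read counts plus some stable control signal are available; the field names (`flu_reads`, `total_reads`, `pmmov_reads`) and the use of PMMoV as a fecal-marker control are my assumptions, not the dashboard's actual schema:

```python
# Hypothetical sketch: normalize raw read counts so that how diluted or
# concentrated a wastewater sample happens to be doesn't dominate the trend.
# Field names below are assumptions for illustration only.

def normalize_reads(samples):
    """Return per-sample normalized signals alongside the raw counts."""
    out = []
    for s in samples:
        out.append({
            "date": s["date"],
            "raw": s["flu_reads"],
            # fraction of all reads in the sample: controls for sequencing depth
            "relative_abundance": s["flu_reads"] / s["total_reads"],
            # ratio to a stable fecal marker (e.g. PMMoV): controls for how
            # concentrated the wastewater itself is
            "marker_normalized": s["flu_reads"] / s["pmmov_reads"],
        })
    return out

samples = [
    {"date": "2024-01-01", "flu_reads": 120, "total_reads": 1_000_000, "pmmov_reads": 4000},
    {"date": "2024-01-08", "flu_reads": 90,  "total_reads": 500_000,   "pmmov_reads": 1500},
]
for row in normalize_reads(samples):
    print(row)
```

Note how the second sample has fewer absolute reads but a higher normalized signal, which is exactly the kind of distinction that absolute numbers hide.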
It’s my understanding that when formulating influenza vaccines for a season, the specific strain of the influenza virus matters a lot. Currently, the dashboard doesn’t seem to answer questions like “How does the dominant Influenza B strain differ today from last year?” and “Did the predictions made at the beginning of the year, when the vaccine for this season was formulated, actually pick the strains that are now dominant?”
The idea of seeking objectivity here is not helpful if you want to contribute to the scientific project. I think that Larry McEnerney is good at explaining why that’s the case, but you can also read plenty of Philosophy and History of Science on why that is.
If you want to contribute to the scientific project, thinking about how what you are doing relates to the scientific project is essential.
I’m not sure what you mean by “validity” and whether it’s a sensible thing to talk about. If you try to optimize for some notion of validity instead of optimizing for doing something that’s valuable to scientists, you are doing something like trying to guess the teacher’s password. You are optimizing for form instead of optimizing for actually creating something valuable.
If you innovate in the method you are using in a way that violates some idea of conventional “validity” but you are providing value, you are doing well. Against Method wasn’t accidentally chosen as a title. When Feynman started using his diagrams, the first reaction of his fellow scientists was that they weren’t “real science”. He ended up getting his Nobel Prize for them.
As far as novelty goes, the query you are proposing isn’t really a good way to determine novelty. A better way to check novelty is not to ask “Is this novel?” but “Is there prior art here?” Today, a good way to check that is to run deep research reports. If your deep research request comes back with “I didn’t find anything”, that’s a better signal for novelty than a question asking whether something is novel being answered with “yes”. LLMs don’t like to answer “I didn’t find anything” when you let them run a deep research request, but they are much more willing to say something is novel when you ask them directly whether it’s novel.
Actually, a lot of scientific progress happens that way. You run experiments that produce results that surprise you. You think about how to explain the results you got, and that brings you a better understanding of the problem domain you are interacting with.
If you want to create something intellectually valuable, you need to go through the intellectual work of engaging with counterarguments to what you are doing. If an LLM provides a criticism of your work, that criticism might or might not be valid. If what you are doing is highly complex, the LLM might not understand what you are doing, and that doesn’t mean that your idea is doomed. Maybe you can flesh out your idea more clearly. Even if you can’t, if the idea provides value it’s still a good idea.