Just noting for the audience that the edits which Anna references in her reply to CronoDAS, as if they had substantively changed the meaning of my original comment, were to add:
The phrase “directly observed”
The parenthetical about having good epistemic hygiene with regards to people’s protestations to the contrary
The bit about agendas often not being made explicit
It did not originally specify undisclosed conflicts of interest in any way that the new version doesn’t. Both versions contained the same core (true) claim: that multiple of the staff members common to both CFAR!2017 and CFAR!2025 often had various (i.e. not only the AI stuff) agendas which would bump participant best interests to second, third, or even lower on the priority ladder.
I’ve also added, just now, a clarifying edit to a higher comment: “Some of these staff members are completely blind to some centrally important axes of care.” This seemed important to add, given that Anna is below making claims of having seen, modeled, and addressed the problems (a refrain I have heard from her, directly, in multiple epochs, and taken damage from naively trusting more than once). More (abstract, philosophical) detail on my views about this sort of dynamic here.
> given that Anna is below making claims of having seen, modeled, and addressed the problems
I think I am mostly saying that I don’t agree that there were ever problems of the sort you are describing, w.r.t. standard of care etc. That is: I think I and other CFAR staff were following the basics of standard deontology w.r.t. participants the whole time, and I think the workshops were good enough that it was probably better to be running them the whole time.
I added detail to caveat that and to try to make the conversation less confusing for the few who’re trying to follow it in a high-detail way.