I haven’t had that much contact with Palisade, but I interpreted them as more like “trying to interview people, see how they think, and provide them info they’ll find useful, and let their curiosities/updates/etc be the judge of what they’ll find useful”, which is … not fraught.
We have done things that are kind of like that (though I wouldn’t describe it that way), but it isn’t the main thing that we’re doing.
Specifically, in 2025, we ran something like 65 test sessions in which we met with small groups of participants (some of these were locals, mostly college students, who we met in our office; some were people recruited via survey sites, who we met on Zoom), and tried to explain the AI situation, as we understand it, to them. We paid these test participants.
Through that process, we could see how these participants were misunderstanding us, what things they were confused about, and what follow-up questions they had. We would then iterate on the content that we were presenting and try again with new groups of participants.
By default, these sessions were semi-structured conversations. Usually, we had some specific points that we wanted to explain, or a frame or metaphor we wanted to try. Often we had prepared slides, and in the later sessions, we were often “giving a presentation” that was mostly solidified down to the sentence level.
I would not describe this as “provide them info they’ll find useful, and let their curiosities/updates/etc be the judge of what they’ll find useful”.
That said, the reason we were doing this in small groups was to give the participants the affordance to interrupt, ask questions, and flag if something seemed wrong or surprising. And we were totally willing to go on tangents from our “lesson plan”, if that seemed like where the participants were at. (Though by the time we had done 15 of these, we had already built up a sense of what the dependencies were, and so usually sticking to the “lesson plan” would answer their confusions faster than deviating, but it was context-dependent, just like any teaching environment.)
We did also have some groups that seemed particularly engaged / interested / invested in understanding. We invited those groups back for follow-up sessions that were explicitly steered by their curiosity: they would ask about anything they were confused about, and we would do our best to answer. But these kinds of sessions were the minority, maybe 3 out of 65ish.
Notably, the point of doing all this is to produce scalable communication products that do a good job of addressing people’s actual tacit beliefs, assumptions, and cruxes about AI. The goal was to learn what people’s background views are, and what kinds of evidence they’re surprised by, so that we can make videos or similar that can address specific common misapprehensions effectively.