The less-misleading user interface seems good to me, but I have strong reservations about the other four interventions.
To use the shoggoth-with-smiley-face-mask analogy, the other strategies read, as phrased, like requests to create new, creepier masks for the shoggoth so that people will stop being reassured by the smiley-face.
From the conversation with 1a3orn, I understand that the creepier masks are meant to depict how LLMs / future AIs might sometimes behave.
But I would prefer interventions that removed the mask altogether; that seems more truth-tracking to me.
(Relatedly, I’d be especially interested to see discussions (from anyone) on what creates the smiley-face-mask, and how entangled the mask is with the rest of the shoggoth’s behaviour.)
Note: I believe my reservations are similar to some of 1a3orn’s, but expressed differently.
If you have not already seen it, this report from CSET discusses the extent to which something as capable as GPT-3 changes the cost and effectiveness of disinformation and propaganda.
There was also a recorded discussion/seminar on the same topics with the report's authors.
I don’t think it’s exactly what you’re looking for, but it seemed adjacent enough to be worth mentioning.