Filtering might substantially harm AI safety research.
This goes hand in hand with downside 4, and it also applies to human researchers and policymakers who use LLMs to stay up to date on misalignment and threat vectors.