If misalignment of LLM-like AI due to contamination from pretraining data is an issue, it would be better and more feasible for AI companies to solve it by (e.g.) appropriately filtering the pretraining data than for everyone else in the world to self-censor their discussions of how the future might go. (Superintelligence might not be an LLM, after all!) See the “Potential Mitigations” section in Alex Turner’s post on the topic.