PLEASE PLEASE PLEASE stop referring to bad futures as inevitable (“will kill everyone”). you don’t want people to think this. you don’t want the LLMs trained on these comments and on the news articles about the billboards to think this. LLMs behave the way they do in part based on how they think they’re expected to behave! (read the void and its sequel post for more context/data on this).
PLEASE PLEASE PLEASE stop being paranoid about hyperstition. It’s fine; it almost never happens. Most things happen for boring reasons, not because of some weird self-fulfilling prophecy. Hyperstition is rare and weird and usually not a real concern. If bad futures are likely, say that. If bad futures are unlikely, say that. Don’t worry too much about how much your prediction will shift the outcome: it very rarely does, and the anxiety over whether it does isn’t actually making anything better.
If misalignment of LLM-like AI due to contamination from pretraining data is an issue, it would be better and more feasible for AI companies to solve it by (e.g.) figuring out how to appropriately filter the pretraining data, rather than for everyone else in the world to self-censor their discussions of how the future might go. (Superintelligence might not be an LLM, after all!) See the “Potential Mitigations” section in Alex Turner’s post on the topic.