A language model (or the language-model-like part of a person) alone can’t properly grok the end of the world. The end of the world is so extreme (it’s the one event it’s always safe to assume hasn’t happened yet) that it’s way out of sample.
People increasing xrisk will be cheered on by their LLMs the whole way.