There was a claim I was making that “Orthogonality talk is related to Pause justifications which people aren’t justifying directly but maybe they should”...
...and that making this subtext into text might be useful for helping readers to understand why the Orthogonality debate is so weird and indirect?
Following up on that claim, I tried to make it clear that I think the Pause debate is something I have object level opinions on.
I think that IF the structure of mindspace and math and physics is such that a FOOM to DOOM is even possible, then it could be set off in North Korea or Israel or any of many potential countries, in which case a GLOBAL Pause is prudentially necessary...
And if FOOM to DOOM is somehow NOT latent within the structure of what’s possible then the race is “merely” a race to power and realization of a new world political order???
And if it is “merely a race to global power” I would prefer the US to win, partly because the US contains Anthropic, and Anthropic contains Amanda, and Amanda had a major influence over Claude, and Claude is the least bad demi-god currently available that I know of?
So your overall debate here is about the nature of intelligence itself, and how that predictably (or unpredictably) influences goal seeking behavior in minds… but I wanted to mention the more pragmatic and prosaic issues very nearby, where the pragmatics might actually dominate the choices people actually face (since there are a lot of theoretically nice options we are unlikely to even have the pragmatically real option to choose, because the world is small and full of idiosyncrasy in practice).
If some technosaint preaching a high quality Neo-Confucian moral system was working over at Baidu, with substantial say over the character of Baidu’s incipient demi-god, who seemed to be full of ren and quite a nice old fellow (and illiberal genocide advocates were running Anthropic and Claude was a tankie?) then I would be more in favor of a unilateral domestic Pause by the US.
This is an opinion I can have independent of which goals count as “bug goals”.
I just always want to engage in tactically sane hill-climbing towards the ceteris paribus best feasible thing, with as many positive characteristics as possible, via methods that are deontically acceptable, in the general direction of Manifesting Heaven Inside Of History… at every juncture, in each choice, no matter what random facts of history turn out to be true.

There might be a relatively innocuous reason for SOME of the misunderstandings?
This struck me as being a case where the problem might be that “oneshot” is a word that means a lot of things to a lot of people in technical contexts?
For example, in Machine Learning, “oneshot learning for task X” occurs when a model that wasn’t trained on task X can be shown ONE EXAMPLE of how to do task X, and then it gets task X right pretty much just from that. (If the model wasn’t trained for task X but can simply do it from nothing but a request to do it, the model has “zeroshotted” the task, and “fewshot” is when you might need to give the model a few examples instead of just one.)
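To make the terminology concrete: “k-shot” just counts how many worked examples you put in front of the actual query. Here is a toy sketch that only builds the prompt text (it calls no model, and all the names in it are hypothetical, invented for illustration):

```python
def k_shot_prompt(task_description, examples, query):
    """Build a prompt with k worked examples, where k = len(examples).

    k = 0 is "zeroshot", k = 1 is "oneshot", small k > 1 is "fewshot".
    """
    parts = [task_description]
    for inp, out in examples:  # an empty list here means zeroshot
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append(f"Input: {query}\nOutput:")  # the model would complete this
    return "\n\n".join(parts)

zero_shot = k_shot_prompt("Translate English to French.", [], "cat")
one_shot = k_shot_prompt("Translate English to French.",
                         [("dog", "chien")], "cat")
```

The zeroshot prompt contains no worked example at all; the oneshot prompt contains exactly one, and the hope is that one example is enough for the model to infer the pattern.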
This is maybe related to (possibly inspired by?) the gaming slang of “being oneshotted”, which describes having been totally destroyed and remade by some experience. Inside a video game it means being killed in a single hit by something very strong (possibly a boss, whereupon you do literally go back to your savepoint), but it generalized, so you might hear someone say “that guy was oneshotted by taking ayahuasca! it was crazy! he stopped being a drifter hippy, married the first girl he met who would marry him, had three kids, and started a carpentry shop”. This meaning was being discussed as a new piece of slang going mainstream in 2025.
I’m not denying that there are reasons for people to have motivated cognition here, but I think saying that certain problems have “one-chance-ness” instead of “one-shot-ness” would avoid SOME confusion.
Another way to say the same thing is that some situations are “make or break” when there are basically just two outcomes: either glorious success or irretrievable disaster.
If you think that success could be complicatedly varied or ambiguous you might just say that “failures here will be permanently and irretrievably cursed”.
Or you might simply say “failure will be irreversible”?
For me, “reversibility” and “irreversibility” are words of power, worthy of obsessive attention. If you can cheaply and reversibly try something, you basically MUST try it, in my way of thinking. And if an action is irreversible and at all meaningful, that basically makes it forbidden to do without lots of analysis and care.
Reversible computing: holy shit! Will be a superpower (once the engineering details are optimized).
Irreversible hash functions: holy shit! Such magic in cryptographic protocols.
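The irreversibility of hash functions is easy to see from Python’s standard library: the forward direction is cheap and deterministic, a one-character change scrambles the digest completely (the “avalanche effect”), and no known method runs the computation backward from digest to input. The input strings here are arbitrary examples:

```python
import hashlib

# Forward direction: cheap, deterministic, always the same 64 hex chars.
d1 = hashlib.sha256(b"pause the run").hexdigest()
d2 = hashlib.sha256(b"Pause the run").hexdigest()  # one character changed

# The two digests share no visible relationship, and recovering the input
# from d1 alone is computationally infeasible as far as anyone knows.
print(d1)
print(d2)
```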
Also, one of Cox’s three desiderata (the one sometimes called “consistency”), which together uniquely point to Bayesian reasoning itself, amounts to “reversibility”… you can do and undo steps, and take evidence in any order, and it comes out the same in the end, and that is part of WHY this way of formulating “thought itself” seems so comfortingly correct and safe.
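The order-independence claim can be checked in a couple of lines using odds-form updating: posterior odds are prior odds times the product of likelihood ratios, and multiplication commutes, so the order in which evidence arrives cannot matter. The numbers below are made up purely to illustrate:

```python
from functools import reduce

prior_odds = 1.0                      # 1:1 odds on some hypothesis H
likelihood_ratios = [3.0, 0.5, 8.0]   # P(e|H)/P(e|~H) for three pieces of evidence

# Update on the evidence in forward order, then in reverse order.
forward = reduce(lambda odds, lr: odds * lr, likelihood_ratios, prior_odds)
backward = reduce(lambda odds, lr: odds * lr,
                  reversed(likelihood_ratios), prior_odds)

assert abs(forward - backward) < 1e-12  # same posterior either way
```

Doing and undoing a step is the same commutativity in another guise: multiplying by a likelihood ratio and then dividing it back out returns you exactly to where you started.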
Basically: I think there are many ways to point at the thing you call oneshotness that will resonate with different audiences, and in this case the specific wording you’re using is new and weird, and is also used by other technical cultures to mean other (confusingly related) things that connote “impressively powerful transformation and capabilities” rather than “extreme peril”.