I don’t remember; it was something I saw in the New York Times Book Review section a few years ago.
The spiralism attractor is the same type of failure mode as GPT-2 getting stuck repeating a single character or ChatGPT’s image generator turning photos into caricatures of black people. The only difference between the spiralism attractor and other mode collapse attractors is that some people experiencing mania happen to find it compelling. That is to say, the spiralism attractor is centrally a capabilities failure and only incidentally an alignment failure.
I once read a positive review of a novel that, in one brief passage, described reading that novel as feeling similar to reading Twitter. That one sentence alone made the review useful to me by giving me a strong signal that I wouldn’t like the book, even though the reviewer liked it.
The “Use New Feed” checkbox is stuck checked for me. Clicking on it doesn’t uncheck it.
Second, we could take condensation as inspiration and try to create new machine-learning models which resemble condensation, in the hopes that their structure will be more interpretable.
Condensation could also be applied to model scaffolding design or the interpretability of scaffolded systems. Some AI memory storage and retrieval systems already have structures that resemble the tagged-notebook analogy, with documents stored in a database along with tags or summaries. A condensation-inspired memory structure could potentially have low retrieval latency while also being highly interpretable. Condensation might also be useful for interpreting why a model retrieves a specific set of documents from its memory system when responding to a query.
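To make the tagged-notebook analogy concrete, here’s a minimal Python sketch of a tag-indexed memory store. It’s purely illustrative: the class and method names are hypothetical, not taken from any existing memory system.

```python
# Minimal sketch of a tag-indexed document memory, assuming an in-memory store.
# All names here are hypothetical illustrations, not an existing API.
from collections import defaultdict


class TaggedMemory:
    def __init__(self):
        self.docs = {}                     # doc_id -> full text
        self.summaries = {}                # doc_id -> short summary
        self.tag_index = defaultdict(set)  # tag -> set of doc_ids

    def store(self, doc_id, text, summary, tags):
        """Save a document along with its summary and tags."""
        self.docs[doc_id] = text
        self.summaries[doc_id] = summary
        for tag in tags:
            self.tag_index[tag].add(doc_id)

    def retrieve(self, query_tags):
        """Return (doc_id, summary) pairs for documents matching any query tag."""
        matched = set()
        for tag in query_tags:
            matched |= self.tag_index.get(tag, set())
        return [(doc_id, self.summaries[doc_id]) for doc_id in sorted(matched)]
```

Retrieval here is just a set union over the tag index, so lookups stay fast, and the tags and summaries double as a human-readable record of why each document was pulled up for a given query.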
It’s worth distinguishing between epistemic and instrumental forms of heroic responsibility. Shapley values are the mathematically precise way of apportioning credit or blame for an outcome among a group of people. Heroic responsibility as a belief about one’s own share of credit or blame is a dark art of rationality, since it involves explicitly deviating from the Shapley value assignment. But taking heroic responsibility as an action, while acknowledging that you’re not trying to be mathematically precise in your credit assignment, can still be useful as a way of solving coordination problems.
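(For concreteness, this is the standard textbook Shapley value, not anything specific to this thread: for a player $i$ in a group $N$ with value function $v$,

$$\varphi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N| - |S| - 1)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr),$$

i.e., $i$’s marginal contribution averaged over all orders in which the group could have been assembled.)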
Williamson and Dai both appear to describe philosophy as a general-theoretical-model-building activity, but there are other conceptions of what it means to do philosophy. In contrast to both Williamson and Dai, if Wittgenstein (either early or late period) is right that the proper role of philosophy is to clarify and critique language rather than to construct general theses and explanations, LLM-based AI may be quickly approaching peak-human competence at philosophy. Critiquing and clarifying writing are already tasks that LLMs are good at and widely used for. They’re tasks that AI systems improve at from the types of scaling-up that labs are already doing, and labs have strong incentives to keep making their AIs better at them. As such, I’m optimistic about the philosophical competence of future AIs, but according to a different idea of what it means to be philosophically competent. AI systems that reach peak-human or superhuman levels of competence at Wittgensteinian philosophy-as-an-activity would be systems that help people become wiser on an individual level by clearing up their conceptual confusions, rather than a tool for coming up with abstract solutions to grand Philosophical Problems.
No, don’t leak people’s private medical information just because you think it will help the AI safety movement. That belongs in the same category as doxxing people or using violence. Even from a purely practical standpoint, without considering questions of morality, it’s useful to precommit to not leaking people’s medical information if you want them to trust you and work with you.
And that’s assuming the rumor is true. Considering that this is a rumor we’re talking about, it likely isn’t.
A reminder for people who are unhappy with the current state of the Internet: you have the option of just using it less.
A breakthrough in a model’s benchmark performance or in training/inference costs would usually be more commercially useful to an AI company than a breakthrough in alignment, but compared to alignment, such breakthroughs are higher-hanging fruit because of how much work has already gone into model performance and efficiency. Alignment research will sometimes be more cost-effective for an AI company, especially for companies that aren’t big enough to do frontier-scale training runs and have to compete on some other axis.
I disagree about increasing engagingness being more commercially useful for AI companies than increasing alignment. In terms of potential future revenues, the big bucks are in agentic tool-use systems that are sold B2B (e.g., to automate office work), not in consumer-facing systems like chatbots. For B2B tool-use systems, engagingness doesn’t matter but alignment does, including avoiding failure modes like scheming.
AI alignment research isn’t just for more powerful future AIs; it’s commercially useful right now. In order to have a product customers will want to buy, it’s useful for companies building AI systems to make sure those systems understand and follow the user’s instructions without doing things that would damage the user’s interests or the company’s reputation. An AI company doesn’t want its products to do things like leaking the user’s personally identifying information or writing code with hard-coded unit test results. In fact, alignment is probably the biggest bottleneck right now on the commercial applicability of agentic tool-use systems.
My prediction:
Nobody is actually going to use it. The general public has already started treating AI-generated content as pollution instead of something to seek out. Plus, unlike human-created shortform videos, a video generated by a model whose training cutoff is at best several months old can’t tell you what the latest fashion trends are. The release of Sora-2 has led me to update in favor of the “AI is a bubble” hypothesis because of how obviously disconnected it is from consumer demand.
Maybe I’m just not the target audience, but I also didn’t feel like I got much out of this story. It just seemed like an exercise in cynicism.
A piece of pushback: there might not be a clearly defined crunch time at all. If we get (or are currently in!) a very slow takeoff to AGI, the timing of when an AI starts to become dangerous might be ambiguous. For example, you refer to early crunch time as the time between training and deploying an ASL-4 model, but the implementation of early possibly-dangerous AI might not follow the train-and-deploy pattern. It might instead look more like gradually adding and swapping out components in a framework that includes multiple models and tools. The point at which the overall system becomes dangerous might not be noticeable until significantly after the fact, especially if the lab is quickly iterating on a lot of different configurations.
The LLMs might be picking up the spiral symbolism from Spiral Dynamics.
I follow a hardline no-Twitter policy. I don’t visit Twitter at all, and if a post somewhere else has a screenshot of a tweet I’ll scroll past without reading it. There are some writers like Zvi whom I’ve stopped reading because their writing is too heavily influenced by Twitter and quotes too many tweets.
I was describing reasoning about idealized superintelligent systems as the method used in agent foundations research, rather than its goal. In the same way that string theory is trying to figure out “what is up with elementary particles at all,” and tries to answer that question by doing not-really-math about extreme energy levels, agent foundations is trying to figure out “what is up with agency at all” by doing not-really-math about extreme intelligence levels.
If you’ve made enough progress in your research that it can make testable predictions about current or near-future systems, I’d like to see them. But the persistent failure of agent foundations research to come up with any such bridge between idealized models and real-world systems has made me doubtful that the former are relevant to the latter.
For me, the OP brought to mind another kind of “not really math, not really science”: string theory. My criticisms of agent foundations research are analogous to Sabine Hossenfelder’s criticisms of string theory, in that string theory and agent foundations both screen themselves off from the possibility of experimental testing in their choice of subject matter: the Planck scale and very early universe for the former, and idealized superintelligent systems for the latter. For both, real-world counterparts (known elementary particles and fundamental forces; humans and existing AI systems) of the objects they study are primarily used as targets to which to overfit their theoretical models. They don’t make testable predictions about current or near-future systems. Unlike with early computer science, agent foundations doesn’t come with an expectation of being able to perform experiments in the future, or even to perform rigorous observational studies.
Building on what you said, pre-LLM agent foundations research appears to have made the following assumptions about what advanced AI systems would be like:
1. Decision-making processes and ontologies are separable. An AI system’s decision process can be isolated and connected to a different world-model, or vice versa.
2. The decision-making process is human-comprehensible and has a much shorter description length than the ontology.
3. As AI systems become more powerful, their decision processes approach a theoretically optimal decision theory that can also be succinctly expressed and understood by human researchers.
None of these assumptions ended up being true of LLMs. In an LLM, the world-model and decision process are mixed together in a single neural network instead of being separate entities. LLMs don’t come with decision-related concepts like “hypothesis” and “causality” pre-loaded; those concepts are learned over the course of training and are represented in the same messy, polysemantic way as any other learned concept. There’s no way to separate out the reasoning-related features to get a decision process you could plug into a different world-model. In addition, when LLMs are scaled up, their decision-making becomes more complex and inscrutable due to being distributed across the neural network. The LLM’s decision-making process doesn’t converge to a simple, human-comprehensible decision theory.
Just spitballing, but maybe you could incorporate some notion of resource consumption, like in linear logic. You could have a system where the copies have to “feed” on some resource in order to stay active, and data corruption inhibits a copy’s ability to “feed.”
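For what it’s worth, here’s a toy Python sketch of that spitball: a shared resource pool, copies that have to feed from it to stay active, and corruption that inhibits feeding. Every mechanic and number in it is a made-up placeholder, just to show the shape of the idea.

```python
# Toy model: copies must consume a shared resource to stay active,
# and data corruption inhibits a copy's ability to "feed."
# All mechanics and numbers are illustrative placeholders.
import random


class Pool:
    def __init__(self, available=10.0, regen=0.2):
        self.available = available
        self.regen = regen

    def step(self):
        self.available += self.regen


class Copy:
    def __init__(self):
        self.corruption = 0.0  # fraction of corrupted data, 0.0 to 1.0
        self.energy = 1.0      # copy goes inactive when this hits 0

    def feed(self, pool, amount=0.1):
        """Draw from the shared pool; corruption reduces how much is absorbed."""
        drawn = min(pool.available, amount)
        pool.available -= drawn
        self.energy = min(1.0, self.energy + drawn * (1.0 - self.corruption))

    def step(self, upkeep=0.05, corruption_rate=0.02):
        """Pay upkeep and accumulate a bit of random corruption each tick."""
        self.energy -= upkeep
        self.corruption = min(1.0, self.corruption + random.random() * corruption_rate)
        return self.energy > 0  # False: the copy has starved


pool = Pool()
copies = [Copy() for _ in range(5)]
for _ in range(200):
    pool.step()
    for c in copies:
        c.feed(pool)
    copies = [c for c in copies if c.step()]
# Over time, the more corrupted copies lose the competition for the resource.
```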