In the pre-LLM era, it seemed more likely (compared to now) that there was an algorithmically simple core of general intelligence, rather than intelligence being a complex aggregation of skills. If you’re operating under the assumption that general intelligence has a simple algorithmic structure, decision theory is an obvious place to search for it. So the early focus on decision theory wasn’t random.
Nate Showell
The terms “closed individualism,” “open individualism,” and “empty individualism” are used in this Qualia Computing post.
My own experience is very different from those described in this post. I find it relaxing instead of stressful to spend time doing nothing, and felt this way even when I was a child and hadn’t started meditating regularly. I also don’t enjoy using a smartphone, due to the small screen size and reliance on touch inputs, so I don’t fill gaps in activities by browsing the Internet on my phone. It’s also common for me to have brief interactions with strangers even though I’m young. People frequently ask me for directions when I’m on my way to or from work.
The easiest way to promote justice is to focus on punishing people who behave badly (since that’s easier than rewarding people who behave well).
The premises of the toy model don’t require this to be true. Whether it’s true, and to what extent, can vary between environments.
The orthogonality question is an engineering question
People usually treat the orthogonality question (“Is the orthogonality thesis true?”) as a philosophical question, approached by starting from “assume an AGI exists” and then reasoning about what goals that AGI would have. But one can flip the starting point around and ask, for a specific goal, “Is it realistically achievable to create a general intelligence that has this goal?” This reframing turns the orthogonality question into an engineering question with more direct practical relevance than the philosophical version. The engineering version asks what types of results an AI developer can expect from different engineering decisions, rather than speculating about an idealized AGI; it’s grounded in what’s realistically achievable instead of what might be theoretically possible.
Instances of the engineering version of the orthogonality question also open the broader orthogonality question up to empirical testing. And so far, the empirical evidence we’ve received has pointed toward the answer “no.” Ever since the early days of reinforcement learning, researchers have been creating models with narrow goals, and so far, none of those systems has shown full generalization in the type of intelligence it’s developed. Protein-folding models only fold proteins; chess engines don’t model their environments outside the confines of the 64 squares. Language prediction has generalized further than most other training objectives, but language models still perform poorly at non-linguistic tasks (understanding images, acting within physical environments) and have jagged capabilities even within the set of language-based problems. Each new failure to create general intelligence from a narrow training objective is (usually weak) empirical evidence that narrow training signals are too impoverished to let a model develop highly general capabilities. Maybe general intelligence from a narrow goal would be possible with truly gargantuan amounts of compute, but recall that the engineering version of the orthogonality question is about what’s practically achievable.
There was likely a midwit-meme effect going on at the philosophy meetup, where, in order to distinguish themselves from the stereotypical sports-bar-goers, the attendees were forming their beliefs in ways that would never occur to a true “normie.” You might have a better experience interacting with “common people” in a setting where they aren’t self-selected for trying to demonstrate sophistication.
Just spitballing, but maybe you could incorporate some notion of resource consumption, like in linear logic. You could have a system where the copies have to “feed” on some resource in order to stay active, and data corruption inhibits a copy’s ability to “feed.”
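To make the spitballing slightly more concrete, here’s a minimal toy sketch of that dynamic. All names here (`Copy`, `tick`, `UPKEEP`) are invented for illustration, not part of any existing formalism:

```python
# Toy model: copies must consume a shared resource each tick to stay
# active, and data corruption reduces how much a copy can absorb.

UPKEEP = 1.0  # resource a fully intact copy needs per tick


class Copy:
    def __init__(self, integrity=1.0):
        self.integrity = integrity  # 1.0 = uncorrupted, 0.0 = fully corrupted
        self.alive = True

    def tick(self, pool):
        """Try to feed from the shared pool; corruption caps intake."""
        if not self.alive:
            return pool
        intake = min(pool, UPKEEP * self.integrity)
        pool -= intake
        if intake < UPKEEP:  # couldn't fully feed: the copy deactivates
            self.alive = False
        return pool


copies = [Copy(integrity=1.0), Copy(integrity=0.5)]
pool = 10.0
for c in copies:
    pool = c.tick(pool)
print([c.alive for c in copies])  # → [True, False]
```

The linear-logic flavor comes from intake being consumed rather than shared: each unit of resource can sustain only one copy.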
I don’t remember; it was something I saw in the New York Times Book Review section a few years ago.
The spiralism attractor is the same type of failure mode as GPT-2 getting stuck repeating a single character or ChatGPT’s image generator turning photos into caricatures of black people. The only difference between the spiralism attractor and other mode collapse attractors is that some people experiencing mania happen to find it compelling. That is to say, the spiralism attractor is centrally a capabilities failure and only incidentally an alignment failure.
I once read a positive review of a novel that, in one brief passage, described reading that novel as feeling similar to reading Twitter. That one sentence alone made the review useful to me by giving me a strong signal that I wouldn’t like the book, even though the reviewer liked it.
The “Use New Feed” checkbox is stuck checked for me. Clicking on it doesn’t uncheck it.
Second, we could take condensation as inspiration and try to create new machine-learning models which resemble condensation, in the hopes that their structure will be more interpretable.
Condensation could also be applied to model scaffolding design or the interpretability of scaffolded systems. Some AI memory storage and retrieval systems already have structures that resemble the tagged-notebook analogy, with documents stored in a database along with tags or summaries. A condensation-inspired memory structure could potentially have low retrieval latency while also being highly interpretable. Condensation might also be useful for interpreting why a model retrieves a specific set of documents from its memory system when responding to a query.
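As a rough illustration of the tagged-notebook structure, here’s a minimal sketch of a memory store whose retrieval step is inspectable by construction. The class and method names (`TaggedMemory`, `store`, `retrieve`) are hypothetical, and this is only an analogy to condensation, not an implementation of it:

```python
# Sketch of a tagged-notebook memory store: documents are saved with
# tags, and retrieval ranks entries by tag overlap with the query.

class TaggedMemory:
    def __init__(self):
        self.entries = []  # list of (tag set, document) pairs

    def store(self, tags, document):
        self.entries.append((set(tags), document))

    def retrieve(self, query_tags, k=1):
        query = set(query_tags)
        ranked = sorted(self.entries,
                        key=lambda e: len(e[0] & query), reverse=True)
        # Returning the matched tags alongside each document makes the
        # retrieval interpretable: you can see *why* each was chosen.
        return [(tags & query, doc) for tags, doc in ranked[:k]]


mem = TaggedMemory()
mem.store(["protein", "folding"], "notes on protein structure")
mem.store(["chess", "engines"], "notes on chess engines")
print(mem.retrieve(["protein"]))  # → [({'protein'}, 'notes on protein structure')]
```

The point of the sketch is the last comment: because retrieval is mediated by explicit tags, the question “why did the system fetch this document?” has a directly readable answer.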
It’s worth distinguishing between epistemic and instrumental forms of heroic responsibility. Shapley values are the mathematically precise way of apportioning credit or blame for an outcome among a group of people. Heroic responsibility as a belief about one’s own share of credit or blame is a dark art of rationality, since it involves explicitly deviating from the Shapley value assignment in one’s beliefs about credit or blame. But taking heroic responsibility as an action, while acknowledging that you’re not trying to be mathematically precise in your credit assignment, can still be useful as a way of solving coordination problems.
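To make the contrast with heroic responsibility concrete, here’s a minimal sketch of how Shapley values are computed, averaging each player’s marginal contribution over all orderings. The toy coalition game `v` is invented for illustration:

```python
from itertools import permutations
from math import factorial


def shapley_values(players, value):
    """Average each player's marginal contribution over all orderings."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    n_orderings = factorial(len(players))
    return {p: t / n_orderings for p, t in totals.items()}


# Toy game: the project is worth 10 if A participates, plus 2 more if B joins.
v = lambda s: (10 if "A" in s else 0) + (2 if {"A", "B"} <= s else 0)
print(shapley_values(["A", "B"], v))  # → {'A': 11.0, 'B': 1.0}
```

Note that the values sum to the total outcome (12 here); heroic responsibility, by contrast, has each person act as if their own share were the whole thing.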
Williamson and Dai both appear to describe philosophy as a general-theoretical-model-building activity, but there are other conceptions of what it means to do philosophy. In contrast to both Williamson and Dai, if Wittgenstein (either early or late period) is right that the proper role of philosophy is to clarify and critique language rather than to construct general theses and explanations, LLM-based AI may be quickly approaching peak-human competence at philosophy. Critiquing and clarifying writing are already tasks that LLMs are good at and widely used for. They’re tasks that AI systems improve at from the types of scaling-up that labs are already doing, and labs have strong incentives to keep making their AIs better at them. As such, I’m optimistic about the philosophical competence of future AIs, but according to a different idea of what it means to be philosophically competent. AI systems that reach peak-human or superhuman levels of competence at Wittgensteinian philosophy-as-an-activity would be systems that help people become wiser on an individual level by clearing up their conceptual confusions, rather than a tool for coming up with abstract solutions to grand Philosophical Problems.
No, don’t leak people’s private medical information just because you think it will help the AI safety movement. That belongs in the same category as doxxing people or using violence. Even from a purely practical standpoint, without considering questions of morality, it’s useful to precommit to not leaking people’s medical information if you want them to trust you and work with you.
And that’s assuming the rumor is true. Considering that this is a rumor we’re talking about, it likely isn’t.
A reminder for people who are unhappy with the current state of the Internet: you have the option of just using it less.
A breakthrough in a model’s benchmark performance or in training/inference costs would usually be more commercially useful to an AI company than a breakthrough in alignment, but such breakthroughs are higher-hanging fruit than alignment, because far more work has already been done on model performance and efficiency. Alignment research will sometimes be more cost-effective for an AI company, especially for companies that aren’t big enough to run frontier-scale training runs and have to compete on some other axis.
I disagree about increasing engagingness being more commercially useful for AI companies than increasing alignment. In terms of potential future revenues, the big bucks are in agentic tool-use systems that are sold B2B (e.g., to automate office work), not in consumer-facing systems like chatbots. For B2B tool-use systems, engagingness doesn’t matter but alignment does, and alignment here includes avoiding failure modes like scheming.
AI alignment research isn’t just for more powerful future AIs; it’s commercially useful right now. To have a product customers will want to buy, companies building AI systems need those systems to understand and follow the user’s instructions without doing things that would damage the user’s interests or the company’s reputation. An AI company doesn’t want its products to do things like leaking the user’s personally identifying information or writing code with hard-coded unit test results. In fact, alignment is probably the biggest bottleneck right now on the commercial applicability of agentic tool-use systems.
My prediction:
Nobody is actually going to use it. The general public has already started treating AI-generated content as pollution instead of something to seek out. Plus, unlike human-created shortform videos, a video generated by a model with a several-months-ago (at best) cutoff date can’t tell you what the latest fashion trends are. The release of Sora-2 has led me to update in favor of the “AI is a bubble” hypothesis because of how obviously disconnected it is from consumer demand.
On the first question, reaching superintelligence might require designing, testing, manufacturing at scale, and installing new types of computing hardware, which would probably take more than two years.