“Aspiring Rationalist” Considered Harmful
The “aspiring” in “aspiring rationalist” seems like superfluous humility at best. Calling yourself a “rationalist” never implied perfection in the first place. It’s just like how calling yourself a “guitarist” doesn’t mean you think you’re Jimi Hendrix. I think this analogy is a good one, because rationality is a human art, just like playing the guitar.
I suppose one might object that the word “rational” denotes a perfect standard, unlike playing the guitar. However, we don’t hesitate to call someone an “idealist” or a “perfectionist” when they’re putting in a serious effort to conform to an ideal or strive towards perfection, so I think this objection is weak. The “-ist” suffix already means that you’re a person trying to do the thing, with all the shortcomings that entails.
Furthermore, adding the “aspiring” appears actively harmful: it dilutes the label. Think of what it would mean for a group of people to call themselves “aspiring guitarists”. The trouble is that the label also fits the sort of person who daydreams about the adulation of playing for large audiences but never gets around to practicing. To honestly call yourself a “guitarist”, on the other hand, you would have to actually, y’know, play the guitar once in a while.
While I acknowledge I’m writing this many years too late, please consider dropping the phrase “aspiring rationalist” from your lexicon.
Hm, I like this, I feel resolved against ‘aspiring rationalist’, which was always losing anyway because it’s a longer and less catchy phrase.
I tend not to use “rationalist” for myself—the implication of identity and mix of description and value signaling rubs me the wrong way. For those who are describing actual group membership, part of the “rationalist community”, I can see reasons to use “rationalist” and “aspiring rationalist” in different contexts, depending on what you’re signaling and to whom.
Outside of community identification, “aspiring rationalist” implies a focus on applying rationality to one’s personal life, whereas “rationalist” alone is broader and may imply only an interest in the topic.
Note: I should acknowledge that I don’t think this is terribly important, and my standard advice for naming and jargon discussions remains “if it matters, use more words”.
I get the point of view that we should be forthright about our goals, practices, and community affiliations. Nothing wrong with using a label to cultivate a sense of belonging. After all, Christians call themselves after their ideal of perfection, so why shouldn’t we?
I think part of the reason is that just about everybody wants to be rational. Not everybody wants to be a guitarist, Christian, perfectionist, or idealist.
Also, most groups have some way of telling whether somebody’s “doing the thing” or not. Catholics have the sacrament and you have to call him Jesus, not Frank. Guitarists practice or have chops. Just about everybody tries to think rationally from time to time, even if they fail, so what’s the thing that somebody would have to do to not be a rationalist?
Why don’t we call ourselves epistemologists? At least it’s one syllable shorter than “aspiring rationalist.” Plus, it implies that we’re interested in rational thought, not that we’re experts at it.
Funnily enough, I feel more trepidation about referring to myself as an epistemologist than as a “rationalist.” I think it sounds too much like a professional title. But heck, I’m an author even though I’ve never published a book. I’m a musician even though I don’t play professionally. Why can’t I be an epistemologist?
In Defense of the Shoggoth Analogy
In reply to: https://twitter.com/OwainEvans_UK/status/1636599127902662658
The explanations in the thread seem to me to be missing the middle or evading the heart of the problem. Zoomed out: an optimization target at the level of personality. Zoomed in: a circuit diagram of layers. But those layers, with their billions of weights, are pretty much Turing complete.
Unfortunately, I don’t think anyone has much idea how all those little learned computations make up said personality. My suspicion is that there isn’t going to be an *easy* way to explain what they’re doing. Of course, I’d be relieved to be wrong here!
This matters because the analogy in the thread between averaged faces and LLM outputs is broken in an important way. (Nearly) every picture of a face in the training data has a nose, so when you look at the nose of an averaged face, it’s based very closely on the noses of all the faces that got averaged. However, despite the size of the training datasets for LLMs, the space of possible queries and topics of conversation is even vaster: it’s exponential in the prompt-window size, unlike the query space for the averaged faces, which is just the size of the image.
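To put rough numbers on that gap, here’s a back-of-envelope sketch in Python. The vocabulary and window sizes are illustrative assumptions on my part, not figures from the thread:

```python
import math

# Back-of-envelope comparison of input spaces. The vocabulary and window
# sizes below are illustrative assumptions, not figures from the thread.

image_pixels = 256 * 256   # an averaged face answers one fixed-size query
vocab_size = 50_000        # assumed LLM token vocabulary
context_len = 8_000        # assumed prompt window, in tokens

# Distinct prompts of context_len tokens: vocab_size ** context_len,
# so the number of digits grows linearly with the window size.
log10_prompts = context_len * math.log10(vocab_size)

print(f"averaged-face query space: {image_pixels:,} pixel positions")
print(f"LLM query space: ~10^{log10_prompts:,.0f} possible prompts")
```

Even a trillion-token training set covers a vanishing sliver of a space that size, which is why averaging-style interpolation can’t be the whole story.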
As such, LLMs are forced to extrapolate hard. So I’d expect the particular generalizations they learned, hiding in those weights, to start to matter once users start poking them in unanticipated ways.
In short, if LLMs are like averaged faces, I think they’re faces that will readily fall apart into Shoggoths if someone looks at them from an unanticipated or uncommon angle.
Another disanalogy is that GPT-4 writes novel quines without thinking out loud in the context window. It still needs to plan them, so the planning probably happens in the layers updating the residual stream, the way it could have happened via thinking step by step, but using the inscrutable internal states of the network instead of tokens. Step-by-step thinking in tokens imitates the humans in its training data, but who knows how step-by-step thinking in the residual stream works.
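For concreteness, a quine is a program that prints its own source code. Here’s the standard minimal Python construction (a textbook example, not one of GPT-4’s actual outputs):

```python
# A minimal Python quine: run the two lines below on their own and the
# program prints exactly its own source. Producing one without visible
# scratch space requires planning the self-reference before emitting a
# single token.
s = 's = %r\nprint(s %% s)'
print(s % s)
```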
Thus shoggoths might be the first to wake up: models might already be training on this hypothetical alien deliberation in the residual stream, while human-imitating deliberation with generated tokens is still not being plugged back into the model as training data. This hypothesis also predicts future LLMs that are trained broadly the same way as modern LLMs and still look as non-agentic and situationally unaware as modern LLMs, yet start succeeding at discussing advanced mathematics, because the necessary studying (inventing and solving exercises that aren’t already in the training set) might happen through alien deliberation in the residual stream during training, while SSL (self-supervised learning) looks at episodes that involve the related theory.
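To make “layers updating the residual stream” concrete, here is a toy numpy sketch of the residual-stream pattern. The shapes and the tanh stand-in are made up; this shows the general shape of the computation, not GPT-4’s architecture:

```python
import numpy as np

# Toy sketch of computation in the residual stream: every layer reads the
# running hidden state and adds an update back into it. Whatever planning
# happens between emitted tokens lives in these additions, which are never
# decoded into readable text. (Shapes and the tanh stand-in are
# illustrative; real transformer blocks use attention and MLPs.)

rng = np.random.default_rng(0)
d_model, n_layers, n_tokens = 64, 8, 16

# Stand-in for each block's learned transformation.
weights = [rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)
           for _ in range(n_layers)]

residual = rng.normal(size=(n_tokens, d_model))  # initial token embeddings
for W in weights:
    update = np.tanh(residual @ W)  # each layer computes an update...
    residual = residual + update    # ...and writes it back into the stream

# Only the final state gets projected to next-token probabilities, so any
# intermediate "thoughts" never appear in the context window.
```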
One of my pet journalism peeves is the “as” (or sometimes “while”) construction, which I often see in titles or first sentences of articles. It looks like “<event A was happening> as <event B was happening>”. You can fact-check the events and it’ll turn out they happened, but the phrasing comes with this super annoying nudge-nudge-wink-wink implication that the two events totally have a direct causal connection. Unfortunately, you can’t pin this on the journalist, because they didn’t actually say it.
This sort of thing happens a lot. To give just a couple of example templates, articles like “as <political thing happened>, markets rallied” or “<stock> falls as <CEO did something>” are often trying to pull this.