the tensor is a lonely place


There is a meteor falling towards a large lake. At first, its reflection in the lake is very small. At first, it has no shadow at all. Next, its shadow becomes very large and faint. Gradually, at first, and then very quickly, the shadow shrinks, becomes darker. The reflection, too, shrinks. At last, the shadow, the reflection, and the meteor all meet — their proportions by now roughly the same, the mappings between their representations more straightforward than ever, and then there is a splash.

There has been no form automated, so far, that was not already the stuff of an artificial intelligence. This is not some sort of coincidence. It has to do with the structure of our conceptions of automation, intelligence, artifice. If a form has been successfully automated, then it had a regularity, a pattern. If a pattern was being carried out by an intelligence, the intelligence was leaning heavily on the pattern. In its capacity as the executor of a pattern or rule, intelligence is already artificial — that is, it is already the intelligence of that artifice, that rule, which it is following. More precisely, the intelligence has become indistinguishable from — or of roughly the same proportions as — the pattern, within the bounds of this one particular form. In the case of an individual truly devoted to the form in question, the individual’s intelligence has been effectively subsumed — dissolved — in the form. (Worship as the identification of oneself with a pattern.) The existential hysteria over artificial intelligence, which is far from peaking, is therefore both nonsensical and wise. It is nonsensical because, if the machine can do it, then it was not such a very special, human thing. It is wise because, unlike in the case of the meteor, as the reflection of the machine grows larger, that which is special about the human shrinks; we are not watching the meteor hit our world from the shore, we are riding the meteor, and the reflection is, in growing, pulling us closer to itself.

A language model is, roughly, a multidimensional index of patterns. The question of whether the scaling of language models can take them (the language models) all the way through to human intelligence sounds like a question about whether language models are missing some quasi-tangible kernel that is present in humans (present in their brains, in their physical forms and ecosystems, etc.), but it is actually a question of the extent to which a given human phenomenon (pattern, behavior, language-game, etc.) has been or could be indexed. Because intelligence has some broad-reaching value, it has already been heavily indexed; the arenas in which the game of intelligence is now played are smaller and the rules of the game more definite, today, than they have ever been in human history. This, too, is not just a lucky coincidence.
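(To make “index” less metaphorical, here is a minimal sketch in Python: a toy bigram model over a made-up corpus, in which “knowing the language” is literally a lookup table of observed continuations, and prediction is retrieval from the index. The corpus and all names here are illustrative assumptions of mine, and this is not how a transformer works; it is only the simplest possible instance of prediction as indexed pattern.)

```python
from collections import Counter, defaultdict

# A toy "index of patterns": for each word, record the distribution of
# words observed to follow it. Prediction is then nothing but a lookup
# in this index.

corpus = "the meteor falls toward the lake and the lake reflects the meteor".split()

index: defaultdict[str, Counter] = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    index[current_word][next_word] += 1

def most_likely_continuation(word: str) -> str | None:
    """Return the most frequently observed follower of `word`, if any."""
    followers = index.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

print(most_likely_continuation("the"))  # -> "meteor" (tied with "lake"; ties fall to insertion order)
```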

The religion — the exponential curve — at play in, for example, OpenAI, is principally that of measure, not that of intelligence, nor yet that of happiness, progress, productivity, optimization, etc. But this religion is far from peculiar to OpenAI or Silicon Valley or even the US; it is in fact the only extant religion in much or most of the world, if we are using the term “religion” to refer to the dominant cult. Sam Altman’s conception of the world — the broad strokes of which are probably shared by the voting majority of OpenAI’s executive team, the board, the engineers — is merely an intensification and expansion of this religion into language-games that, historically, have been inaccessible for economic or technological reasons.

“It knows the moves of the game, it can make the moves, but it does not understand what it is doing when it makes the moves.” This is, in one way, wise, and in another way, nonsensical. It is wise because there are, after all, some games for which the grammar of “understanding” includes not just subsequent accuracy of play in analogous but unknown terrain, but also the feeling that the other person has understood; that this understanding is, as it were, involuntary; that the other person is not enacting the subtle, myriad behaviors of understanding as a means to the end of making you (the person who wants to be understood) feel understood; that it is not enacting these behaviors merely because it has been trained or incentivized to do so; that it is not moving like this merely because this is the move, here (this is the proper move, this is, on average, the move to which its weights direct it); that, in short, it never strikes you (the person who wants to be understood) that there is just a mechanism on the other side. It is nonsensical because if a behavior is a behavior, then it can be identified, indexed, embedded in the existing gradients; a simulation could be set up; etc. — “But then what would have happened to the rule that these behaviors be those of a human and not a simulation, specifically?” — Nothing would have happened to that rule. — But we are now just playing games with “simulation,” “seeming,” etc. Practically speaking, there has to be enough demand for such a simulation to be established, the solution to the problem has to be cheap enough relative to that demand, and this cost-demand tuple has itself to be more heavily weighted than the other cost-demand tuples available, in order for this (the creation of such a deep fake) to be the move, here (the proper move; the move to which, on average, our weights and incentives direct us). People will undoubtedly speculate themselves into oblivion, the Hudson River, and the Golden Gate Strait over the question of whether — and if so, why — such an ultra-deep-fake humanoid intelligence could or would be made, but for now it is enough to say that OpenAI, Google, Microsoft, etc., are optimizing less for similarity to humans and more for those curves which measure us. Which makes sense.

(“It seems — but it isn’t.” — “But then — in what way precisely isn’t it? What is the difference, here? You have to be able to show me a measurable difference!”)

Was there ever intelligence that was not artificial? Show me. Show me what made it not artificial. — “It broke patterns. It generated new patterns.” — Which patterns did it break? How are we defining a break? What constitutes a new pattern? (Which breakages yield alienation (cf. “hallucination”)?) Is there a pattern to schizophrenic word salad? Well, of course there is, but that pattern may have very little to do with patterns within the words themselves. In fact, one of the characteristic patterns of word salad is that the arrangement of the words is unpredictable, but that very fact means, a priori, that the word salad is not grammar-less; the words constitute a pattern. A language-game is being played with these words. The game is not being played in the words — the grammar has nothing to do with the relation of these words to one another — it has to do with the relation of this form of speech to those of the other people on the subway. For now, it seems that OpenAI is going to select for novelties that are most likely to generate GDP growth and quality-of-life improvements, where the latter is defined more or less as the UN would define it (problems are easily solved, no one is hungry, there is optimism, opportunity, otium sine bello, leisure without war). This is part of why Altman is going to be able to execute: he is speaking our language, which is to say, the language of the West.

And that takes us to part 2: the sense in which the sinusoid of language models is on a different scale from that of other recent memes.
