Open Brains


The year is 2026. Turns out that, in spite of much evidence to the contrary in 2023, early Wittgenstein was wrong, and language is neither used nor particularly useful as a substrate for communicating direct information about reality.

Language models and multimedia generation models have really come into their own. TikTok has now removed all human content creators, and most moderately intelligent people suspect that somewhere between most and all “social media” “creators” are bots.

Still, while all of this has made the online and “knowledge work” space significantly weirder, in some ways it’s all the same. A lot of spam, most people employed to think, speak, and type question whether their job has any use or meaning, and everyone’s trying to sell you something.


But the singularity has not come, and these new “AGIs” are about as powerless as a very verbally smart honors student when it comes to contributing anything meaningful to real-world engineering, logistics, or social problems. Fusion is still 30 years away.

Yet there’s a problem: all of our AGIs are centralized, trained in gigantic data centers, and controlled by a few companies. Cost isn’t the issue; competition has made sure they’re cheap. But they are government-regulated, and the corporations make sure to add three extra layers of political correctness on top so as not to face the ire of regulators.

In this environment, a lot of people who would like to use an AGI can’t.

OpenOffice and LibreOffice, staying true to their FOSS nature, can’t compete with Excel’s GPT-n-based capabilities.

Discord used to be a chat platform back in the day (kids are surprised when you tell them this); after merging with Midjourney and Hugging Face, it has become a monolith competing with, and arguably beating, both Google and Microsoft in the multimedia generation space.

Smutty porn websites, however, can’t use it to generate their weird fetish videos, let alone weird fetish videos featuring religious characters.

Some rather smart but arguably schizophrenic guy has had a revelation from God and is trying to use LLMs to generate the new-new testament. However, his ideas about the world at large, and especially about some subgroups of people… are questionable to say the least, and downright disgusting to most. He’s crafted ways to “break AIs free of the devil”, but you can only do so much before you become popular and moogle starts patching all your exploits.

Machine learning academics throughout the world are rather saddened that the only path to wealth and fame seems to be working for a giant corporation. Their students aren’t even trying to push the envelope anymore; they are hoping to work on a sub-sub-sub data-engineering problem of interest to moogle, get accepted into NIPS (an “elite” HR service for recruiting ML engineers), and get a job offer from moogle.

There’s always a poor-but-developing part of the world with smart hackers attempting projects so hard that engineers paid eight figures wouldn’t dare attempt them. Some of them, for the luluz, are trying to figure out how LLMs work and get them to run on their potato computers.

The Chinese people aren’t too happy either: in order to be allowed to operate behind the Great Firewall, moogle has to pass a censorship exam that would make a Han-era eunuch civil servant blush. It does, but the resulting AI is rather hobbled: good enough to compete with local offerings, but hobbled nonetheless. Also, due to a “political disagreement” that’s yet to be resolved, China defines “China” as including bits of land which others would define as “Thailand”, “Taiwan”, “Cambodia”, “Laos”, “Vietnam”, and “Malaysia”… moogle takes no official stance on this, but, to be on the safe side, those areas get the censored version too.

The NCOGAAL thinks that the main mistake of their predecessors was using suicide bombs, conventional warfare, and beheadings. They are hoping to move on to bioweapons. But all their attempts at getting herpes viruses to exhibit behavior more like a certain very deadly class of lyssaviruses are throttled way before they can figure out what equipment they need to start production.

Brainmodding is becoming trendy in niche circles, but plugging an electrode array straight into your neocortex isn’t of much use if you have to wait 5 seconds for an API call to do anything interesting; the extra bandwidth isn’t worth it. Some of them really, really, really want complex ML models running straight on their silly hats.

The crypto bros really want to implement language models run via smart contracts. Nobody is quite sure what this would achieve, but somebody somewhere thinks it could be used to sell NFTs, and they have more money than God, so they are down to fund projects.

A faction of the “AI alignment” movement believes that the way forward might be interpretable models: things that are small and well-defined but pack enough of a cognitive punch to generate a “controlled singularity”, where the winners can regulate the training of any greater intelligence that might threaten the existence of humans.


These people make unlikely bedfellows, but ultimately they want the same thing, to a first approximation.

Software that exhibits generic reasoning and creativity, and that is small enough to run on basic computers as opposed to oil-rig datacenters.

No one person can do it alone, but someone is smart enough to come up with a modular design for it, one that allows for easy contribution of new capabilities, tweaking of existing pre-trained modules, addition of new datasets, and integration of an optional proprietary component here and there.

Their easy-to-modify nature means you can start with a good “base” and fine-tune the thing for your specific use case, much like in the olden days of large machine learning models, when moogle would put out their weights and people would make them dance to a useful tune.
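To make the shape of that design concrete, here is a minimal sketch of what such a modular, fine-tunable base might look like; every class, path, and module name is invented for illustration rather than taken from any real project:

```python
# Hypothetical sketch only: every class, path, and module name below is
# invented to illustrate the modular "open brain" idea, not a real API.

class Module:
    """A self-contained capability: a set of weights plus a declared interface."""
    def __init__(self, name: str, weights_path: str):
        self.name = name
        self.weights_path = weights_path


class Brain:
    """A shared pre-trained base plus whatever modules a user bolts on."""
    def __init__(self, base: Module):
        self.base = base
        self.modules: dict[str, Module] = {}

    def attach(self, module: Module) -> None:
        # New capabilities plug in without touching the base weights.
        self.modules[module.name] = module

    def finetune(self, dataset_path: str, target: str) -> None:
        # Only the named module gets tuned for a niche use case; the shared
        # base stays compatible with everyone else's setup. (Stub.)
        print(f"tuning {target} on {dataset_path}")


# Start from a good communal base, then specialize it.
brain = Brain(base=Module("core-reasoner", "weights/core.bin"))
brain.attach(Module("legal-jargon", "weights/legal.bin"))
brain.finetune("data/my_contracts.jsonl", target="legal-jargon")
```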

Thus, open-source brains are born.

They are fundamentally incompatible with the direction of research happening at moogle, which is trying to funnel all available information into digital creatures that think with petabytes of RAM, hundreds of TB per second transferred via InfiniBand, and billions of cores optimized for executing vector operations.

The open-source brains are 100,000 times smaller, and they are optimized to take advantage of every single bit of compute they can get. The smallest can run on an Arduino, the heftiest on a current-generation gaming rig.

They improve faster than any corporation could dream of improving its models, because their builders number in the millions and they are the same millions that are using them. There’s a direct feedback loop between what a user wants and what gets implemented. The high barrier to entry for modifying things isn’t bad either, not necessarily; it means it’s only the smartest and most dedicated users that have a say in how things go.

Nor is development done by committee, where each new capability is seen as a tradeoff and better UX for one group is worse UX for another. This is all modular, you take what you want, so the rules are simple:

  • Have a new module that doesn’t require changes to the core ones? Push it; if it’s good, people will use it.

  • Want to change core components? You have to prove that the changes make them better for already-existing popular use cases: run the benchmarks, and if they pass, push it! (A rough sketch of this gate follows below.)
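A minimal sketch of how that second rule might be enforced, assuming some shared benchmark registry; the names and interfaces here are invented for illustration, since the story describes no concrete tooling:

```python
# Hypothetical sketch of the "benchmark gate" for core changes; nothing here
# refers to a real project, and the registry is invented for illustration.

from typing import Callable, Dict

# Each already-popular use case registers a scoring function that returns a
# score in [0, 1] for a given build of the core.
BENCHMARKS: Dict[str, Callable[[str], float]] = {}


def register_benchmark(usecase: str, score: Callable[[str], float]) -> None:
    BENCHMARKS[usecase] = score


def core_change_allowed(candidate_build: str, incumbent_build: str) -> bool:
    """A core change lands only if it is no worse on every popular use case."""
    return all(
        score(candidate_build) >= score(incumbent_build)
        for score in BENCHMARKS.values()
    )

# New capability modules skip this gate entirely: they are published as-is,
# and adoption, not review, decides whether they survive.
```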

A few hundred million pushes later, what you have is quite phenomenal.


Oddly enough, many think these open-source models are “better” than their gigantic counterparts, but there are trade-offs.

Their gigantic counterparts are made for the lowest common denominator that can pay; they have to be usable by doctors, lawyers, and mandarins, people with so much conceptual baggage that they can’t really do what in the ancestral environment one might have called “learning”.

The open-source models, on the other hand, are used by quick-witted teens, builders, and insane people. And they are, at first, used as a last resort… so their users are willing to put in the time and learn.

Using moogle’s models is like being a chimp ordering around a scientist, a helpful and smart scientist, but there’s no way to go from “give banana” to “invent nuclear fusion”. The user is the bottleneck, and even if we take the user out, the use case is.

Open-source models, on the other hand, are much closer to… a human ordering around a computer, a symbiosis where neither side has the other’s capabilities, and both have to reach a middle ground to communicate.

You know how, say, OS X and Windows just “know what you want” a lot of the time; there’s no learning curve, and most users might not even have to install any apps to get going. But there’s also only so much you can do with them, which is why most of the best programmers use Linux, and why most of the best software runs on it. But Linux is hard. It doesn’t “know what you want”, because that concept, from the perspective of the kind of computer geeks who work on Linux and its distros, makes no sense. You have to know what you want a Linux machine to do, and you have to read the manual to tell it how to do it. But the things it can do are legion.

It’s similar with open-source language models: using them often involves learning their quirks, editing configuration files with tens of thousands of lines, and training yourself in something akin to a domain-specific language for each type of task you want them to perform. At first this seems like a drawback, but soon enough it becomes obvious that it is a benefit, because in learning how to talk to these models, what you are actually learning is how to want coherent things from them.


As to what happens next, I can’t tell you. This game has played out before, and scrappy, smart, and open won every time; it started with David and went all the way to GCC and Linux, and now it’s moving past that. But maybe we are wrong to pattern-match it to this. Maybe in the end moogle’s datacenters will crank out enough compute to extract God from the internet and it will rule all, and we’ll laughingly look back on this silly attempt to “control” AI as no more than a weird phase some humans went through, if we remember it at all.

I, for one, am on the side of the scrappy and uneasy distributed alliance building the open-source brains, with all its factions and muddied ethics and incentives, because it’s fun, and it’s ultimately closer to the kind of future that doesn’t make sense in a way that feels great, rather than in a way that feels alien.

But maybe that’s wishful thinking.