gwern
For extremely opinionated ‘tags’ like that, where their very existence is probably going too far, maybe users should be encouraged to simply use a comment in their Short Form to list URLs? Since looking up one’s Short Form comments to edit in a URL is annoying, some UX could be slapped on top for convenience: a widget on every page for “add to Personal List [A / B / C / D]”, where a ‘Personal List’ is just a Short Form comment starting with a phrase “A” followed by a list of links which gets auto-edited to append the next one.
(For less inflammatory ones, I think my personalized-wiki hybrid proposal works fine by clearly subordinating the user comments ‘replying’ to the tag and indicating responsibility & non-community-endorsement.)
Those are not randomly selected pairs, however. There are 3 major causal patterns: A->B, A<-B, and A<-C->B. Daecaneus is pointing out that for a random pair of correlations of some variables, we do not assign a uniform prior of 33% to each of these. While it may sound crazy to try to argue for some specific prior like ‘we should assign 1% to the direct causal patterns of A->B and A<-B, and 99% to the confounding pattern of A<-C->B’, this is a lot closer to the truth than thinking that ‘a third of the time, A causes B; a third of the time, B causes A; and the other third of the time, it’s just some confounder’.
What would be relevant there is “Everything is Correlated”. If you look at, say, Meehl’s examples of correlations from very large datasets, and ask about causality, I think it becomes clearer. Let’s take one of his first examples:
For example, only children are nearly twice as likely to be Presbyterian than Baptist in Minnesota, more than half of the Episcopalians “usually like school” but only 45% of Lutherans do, 55% of Presbyterians feel that their grades reflect their abilities as compared to only 47% of Episcopalians, and Episcopalians are more likely to be male whereas Baptists are more likely to be female.
Like, if you randomly assigned Baptist children to be converted to Presbyterianism, it seems unlikely that their school-liking will suddenly jump because they go somewhere else on Sunday, or that siblings will appear & vanish; it also seems unlikely that if they start liking school (maybe because of a nicer principal), that many of those children would spontaneously convert to Presbyterianism. Similarly, it seems rather unlikely that undergoing sexual-reassignment surgery will make Episcopalian men and Baptist women swap places, and it seems even more unlikely that their religious status caused their gender at conception. In all of these 5 cases, we are pretty sure that we can rule out one of the direct patterns, and that it was probably the third, and we could go through the rest of Meehl’s examples. (Indeed, this turns out to be a bad example because we can apply our knowledge that sex must have come many years before any other variable like “has cold hands” or “likes poetry” to rule out one pattern, but even so, we still don’t find any 50%s: it’s usually pretty obviously direct causation from the temporally earlier variable, or confounding, or both.)
So what I am doing in ‘How Often Does Correlation=Causality?’ is testing the claim that “yes, of course it would be absurd to take pairs of arbitrary variables and calculate their causal patterns for prior probabilities, because yeah, it would be low, maybe approaching 0 - but that’s irrelevant because that’s not what you or I are discussing when we discuss things like medicine. We’re discussing the good correlations, for interventions which have been filtered through the scientific process. All of the interventions we are discussing are clearly plausible and do not require time travel machines, usually have mechanisms proposed, have survived sophisticated statistical analysis which often controls for covariates or confounders, are regarded as credible by highly sophisticated credentialed experts like doctors or researchers with centuries of experience, and may even have had quasi-randomized or other kinds of experimental evidence; surely we can repose at least, say, 90% credibility, by the time that some drug or surgery or educational program has gotten that far and we’re reading about it in our favorite newspaper or blog? Being wrong 1 in 10 times would be painful, but it certainly doesn’t justify the sort of corrosive epistemological nihilism you seem to be espousing.”
But unfortunately, it seems that the error rate, after everything we humans can collectively do, is still a lot higher than 1 in 10 before the randomized version gets run. (Which implies that the scientific evidence is not very good in terms of providing enough Bayesian evidence to promote the hypothesis from <1% to >90%, or that it’s <<1% because causality is that rare.)
Classic ‘typical mind’ like experience: https://www.lesswrong.com/posts/baTWMegR42PAsH9qJ/generalizing-from-one-example
Key graph: https://arxiv.org/pdf/2404.04125#page=6
I don’t think they ‘consistently find’ anything, except about possibly CLIP (which we’ve known to have severe flaws ever since it came out, as expected from a contrastive loss, despite of course many extremely valuable uses). Unless I’m missing something in the transcript, Computerphile doesn’t mention this at all.
My earlier comment on meta-learning and Bayesian RL/inference for background: https://www.lesswrong.com/posts/TiBsZ9beNqDHEvXt4/how-we-picture-bayesian-agents?commentId=yhmoEbztTunQMRzJx
The main question I have been thinking about is: what is a ‘state’ for language, and how can it be useful if discovered in this way?
The way I would put it is that ‘state’ is misleading you here. It makes you think that it must be some sort of little Turing machine or clockwork, where it has a ‘state’, like the current state of the Turing machine tape or the rotations of each gear in a clockwork gadget, where the goal is to infer that. This is misleading, and it is a coincidence in these simple toy problems, which are so simple that there is nothing to know beyond the actual state.
As Ortega et al highlights in those graphs, what you are really trying to define is the sufficient statistics: the summary of the data (history) which is 100% adequate for decision making, and where additionally knowing the original raw data doesn’t help you.
In the coin flip case, the sufficient statistics are simply the 2-tuple (heads,tails), and you define a very simple decision over all of the possible observed 2-tuples. Note that the sufficient statistic is less information than the original raw history, because you throw out the ordering. (A 2-tuple like ‘(3,1)’ is simpler than all of the histories it summarizes, like ‘[1,1,1,0]’, ‘[0,1,1,1]’, ‘[1,0,1,1]’, etc.) From the point of view of decision making, these all yield the same posterior distribution over the coin flip probability parameter, which is all you need for decision making (optimal action: ‘bet on the side with the higher probability’), and so that’s the sufficient statistic. If I tell you the history as a list instead of a 2-tuple, you cannot make better decisions. It just doesn’t matter if you got a tails first and then all heads, or all heads first then tails, etc.
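A minimal sketch of that sufficiency claim (my own illustration, assuming a uniform Beta(1,1) prior; the function names are mine): every history that collapses to the same (heads, tails) 2-tuple yields the same posterior and the same optimal bet.

```python
def posterior_params(history, alpha=1, beta=1):
    """Update a Beta(alpha, beta) prior on a list of 0/1 coin flips."""
    heads = sum(history)
    tails = len(history) - heads
    return (alpha + heads, beta + tails)

def optimal_bet(history):
    """Bet on whichever side has the higher posterior probability."""
    a, b = posterior_params(history)
    return 'heads' if a / (a + b) > 0.5 else 'tails'

# All histories summarized by the 2-tuple (3, 1) are interchangeable:
for h in ([1, 1, 1, 0], [0, 1, 1, 1], [1, 0, 1, 1]):
    assert posterior_params(h) == (4, 2)
    assert optimal_bet(h) == 'heads'
```

Handing the decision-maker the full ordered list instead of the 2-tuple changes nothing about either function’s output, which is exactly what ‘sufficient’ means here.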
It is not obvious that this is true: a priori, maybe that ordering was hugely important, and those correspond to different games. But the RNN there has learned that the differences are not important, and in fact, they are all the same.
And the 2-tuple here doesn’t correspond to any particular environment ‘state’. The environment doesn’t need to store that anywhere. The environment is just a RNG operating according to the coin flip probability, independently every turn of the game, with no memory. There is nowhere which is counting heads & tails in a 2-tuple. That exists solely in the RNN’s hidden state as it accumulates evidence over turns, and optimally updates priors to posteriors every observed coin flip, and possibly switches its bet.
So, in language tasks like LLMs, they are the same thing, but on a vastly grander scale, and still highly incomplete. They are (trying to) infer sufficient statistics of whatever language-games they have been trained on, and then predicting accordingly.
What are those sufficient statistics in LLMs? Hard to say. In that coinflip example, it is so simple that we can easily derive by hand the conjugate statistics and know it is just a binomial and so we only need to track heads/tails as the one and only sufficient statistic, and we can then look in the hidden state to find where that is encoded in a converged optimal agent. In LLMs… not so much. There’s a lot going on.
Based on interpretability research and studies of how well they simulate people, as well as just all of the anecdotal experience with the base models, we can point to a few latents like honesty, calibration, demographics, and so on. (See Janus’s “Simulator Theory” for a more poetic take, less focused on agency than the straight Bayesian meta-imitation learning take I’m giving here.) Meanwhile, there are tons of things about the inputs that the model wants to throw away, irrelevant details like the exact misspellings of words in the prompt (while recording that there were misspellings, as grist for the inference mill about the environment generating the misspelled text).
So conceptually, the sufficient statistics when you or I punch in a prompt to GPT-3 might look like some extremely long list of variables like, “English speaker, Millennial, American, telling the truth, reliable, above-average intelligence, Common Crawl-like text not corrupted by WET processing, shortform, Markdown formatting, only 1 common typo or misspelling total, …” and it will then tailor responses accordingly and maximize its utility by predicting the next token accurately (because the ‘coin flip’ there is simply betting on the logits with the highest likelihood etc). Like the coinflip 2-tuple, most of these do not correspond to any real-world ‘state’: if you or I put in a prompt, there is no atom or set of atoms which corresponds to many of these variables. But they have consequences: if we ask about Tiananmen Square, for example, we’ll get a different answer than if we had asked in Mandarin, because the sufficient statistics there are inferred to be very different and yield a different array of latents which cause different outputs.
And that’s what “state” is for language: it is the model’s best attempt to infer a useful set of latent variables which collectively are sufficient statistics for whatever language-game or task or environment or agent-history or whatever the context/prompt encodes, which then supports optimal decision-making.
Rest assured, there is plenty that could leak at OA… (And might were there not NDAs, which of course is much of the point of having them.)
For a past example, note that no one knew that Sam Altman had been fired as YC CEO for similar reasons as OA CEO, until the extreme aggravating factor of the OA coup, 5 years later. That was certainly more than ‘run of the mill office politics’, I’m sure you’ll agree, but if that could be kept secret, surely lesser things now could be kept secret well past 2029?
In terms of factorizing or fingerprinting, 20 Tarot concepts seems like a lot; it’s exhausting even just to skim it. Why do you think you need so many and that they aren’t just mostly many fewer factors like Big Five or dirty uninterpretable mixes? Like the 500 closest tokens generally look pretty random to me for each one.
I think it is safe to infer from the conspicuous and repeated silence by ex-OA employees, when asked whether they signed an NDA which also included a gag order about the NDA, that there is in fact an NDA with a gag order in it, presumably tied to the OA LLC PPUs (which are not real equity, and so are probably even less protected than usual).
For example, a 70B model trained on next-token prediction only on the entire 20TB GenBank dataset will have better performance at next-nucleotide prediction than a 70B model that has been trained both on the 20TB GenBank dataset and on all 14TB of code on GitHub.
I don’t believe that’s obvious, and to the extent that it’s true, I think it’s largely irrelevant (and part of the general prejudice against scaling & Bitter Lesson thinking, where everyone is desperate to find an excuse for small specialist models with complicated structures & fancy inductive biases because that feels right).
Once you have a bunch of specialized models “the weights are identical” and “a fine tune can be applied to all members” no longer holds.
Nor do I see how this is relevant to your original claim. If you have lots of task-specialist models, how does this refute the claim that those will be able to coordinate? Of course they will. They will just share weight updates in exactly the way I just outlined, which works so well in practice. You may not be able to share parameter-updates across your protein-only and your Python-only LLMs, but they will be able to share updates within that model family and the original claim (“AGIs derived from the same model are likely to collaborate more effectively than humans because their weights are identical. Any fine-tune can be applied to all members, and text produced by one can be understood by all members.”) remains true, no matter how you swap out your definition of ‘model’.
DL models are fantastically good at collaborating and updating each other, in many ways completely impossible for humans, whether you are talking about AGI models or narrow specialist models.
You might find my notes of interest.
I think this only holds if fine tunes are composable, which as far as I can tell they aren’t
You know ‘finetunes are composable’, because a finetune is just a gradient descent step on a batch of data and a parameter update, and if you train on more than one GPU and share updates, DL training still works {{citation needed}}.
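To make the ‘a finetune is just a gradient step, and steps compose’ point concrete, here is a toy sketch (entirely my own framing, using plain numpy linear regression rather than any real LLM or framework): two workers each compute a parameter delta from the same stale weights on different batches, and averaging the deltas onto the shared weights still converges, exactly as in ordinary data-parallel training.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(64, 3))
y = X @ true_w                        # noiseless targets for the toy problem
w = np.zeros(3)                       # the shared "model"

def finetune_delta(w, Xb, yb, lr=0.1):
    """One SGD step on one batch of data, returned as a parameter *delta*."""
    grad = 2 * Xb.T @ (Xb @ w - yb) / len(yb)
    return -lr * grad

for _ in range(300):
    # Two 'workers' independently compute a finetune from the same weights:
    d1 = finetune_delta(w, X[:32], y[:32])
    d2 = finetune_delta(w, X[32:], y[32:])
    w = w + (d1 + d2) / 2             # composing the two finetunes
```

After the loop, `w` has converged to `true_w`: neither worker ever saw the other’s batch, yet their composed updates behave like training on all the data.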
If you can train asynchronously on a thousand, or 20,000, or 100,000 GPUs, that is what you are doing; this is especially true in DRL, where you might be, say, training across 170,000 CPU-cores. This works because you don’t insist on everything being up to date every moment and you accept that there will be degrees of inconsistency/outdatedness. (You are certainly not accumulating the gradient across the entire cluster by waiting for every single node, pausing everything, calculating a single global step, and pushing it out, and only then resuming, as if it were a single GPU! Really, you don’t even want to do that on a single GPU for DRL if you gotta go fast.) This works so well that people will casually talk about training “an” AlphaZero, even though they actually mean something more like “the 512 separate instances of AlphaZero we are composing finetunes of” (or more).*
You do have issues with stale gradients and off-policyness of updates and how to best optimize throughput of all of the actors vs training nodes and push out model updates efficiently so nodes stop executing outdated parameters as quickly as possible, and DeepMind & OpenAI etc have done a lot of work on that—but at that point, as in the joke, you have conceded that finetunes are composable and you can keep a very large number of replicas in sync, and it is merely a matter of haggling over how much efficiency you lose.
Also note that it takes a lot less compute to keep a model up to date doing simple online learning on new data than it does to train it from scratch on all historical data summed together (obviously), so what devrandom is talking about is actually a lot easier than creating the model in the first place.
A better model to imagine is not “somehow finetunes from millions of independent models magically compose” (although actually they would compose pretty well), but more like, “millions of independent actors do their ordinary business, while spending their spare bandwidth downloading the latest binary delta from peer nodes (which due to sparsity & not falling too far out of sync, is always on the order of megabytes, not terabytes), and once every tens of thousands of forward passes, discover a novel or hard piece of data, and mail back a few kilobytes of text to the central training node of a few thousand GPUs, who are continually learning on the hard samples being passed back to them by the main fleet, and who keep pushing out an immediately updated model to all of the actor models, and so ‘the model’ is always up to date and no instance is more than hours out of date with ‘the model’ (aside from the usual long tail of stragglers or unhealthy nodes which will get reaped)”.
* I fear this is one of those cases where our casual reification of entities leads to poor intuitions, akin to asking ‘how many computers are in your computer you are using right now?‘; usually, the answer is just ‘1’, because really, who cares how exactly your ‘smartphone’ or ‘laptop’ or ‘desktop’ or ‘server’ is made up of a bunch of different pieces of silicon—unless you’re discussing something like device performance or security, in which case it may matter quite a lot and you’d better not think of yourself as owning ‘a’ smartphone.
(likely conditional on some aspects of the training setup, idk, self-supervised predictive loss function?)
Pretraining, specifically: https://gwern.net/doc/reinforcement-learning/meta-learning/continual-learning/index#scialom-et-al-2022-section
The intuition is that after pretraining, models can map new data into very efficient low-dimensional latents and have tons of free space / unused parameters. So you can easily prune them, but also easily specialize them with LoRA (because the sparsity is automatic, just learned) or just regular online SGD.
But yeah, it’s not a real problem anymore, and the continual learning research community is still in denial about this and confining itself to artificially tiny networks to keep the game going.
Altman made a Twitter-edit joke about ‘gpt-2 i mean gpt2’, so at this point, I think it’s just a funny troll-name related to the ‘v2 personality’ which makes it a successor to the ChatGPT ‘v1’, presumably, ‘personality’. See, it’s gptv2 geddit not gpt-2? very funny, everyone lol at troll
Sure, the poem prompt I mentioned using is like 3500 characters all on its own, and it had no issues repeatedly revising and printing out 4 new iterations of the poem without apparently forgetting when I used up my quota yesterday, so that convo must’ve been several thousand BPEs.
It definitely exceeds 1024 BPEs of context (we wouldn’t be discussing it if it didn’t; I don’t think people even know how to write prompts which, combined with the system prompt etc, would fit in 1024 BPEs anymore), and it is almost certainly not GPT-2, come on.
And they already have a Sora clone called Vidu, for heaven’s sake.
No, they don’t. They have a video generation model, which is one of a great many published over the past few years as image generation increasingly became solved, such as Imagen Video or Phenaki from Google years ago, and the Vidu samples are clearly inferior to Sora (despite heavy emphasis on the ‘pan over static scene’ easy niche): https://www.youtube.com/watch?v=u1R-jxDPC70
Here we are in 2024, and we’re still being told how Real Soon Now Chinese DL will crush Westerners. I’ve been hearing this for almost a decade now, and I’ve stopped being impressed by the likes of Hsu talking about how “China graduates a million engineers a year!” or whatever. Somehow, the Next Big Thing never comes out of Chinese DL, no matter how many papers or citations or patents they have each year. Something to think about.
(I also have an ongoing Twitter series where every half year or so, I tweet a few of the frontier-pushing Western DL achievements, and I ask for merely 3 Chinese things as good—not better, just plausibly as good, including in retrospect from previous years. You know how many actual legitimate answers I’ve gotten? Like 1. Somehow, all the e/accs and China hawks like Alexandr Wang can’t seem to think of even a single one which was at or past the frontier, as opposed to the latest shiny ‘catches up to GPT-4!* * [on narrow benchmarks, YMMV]’ clone model.)
Nah, it’s just a PR stunt. Remember when DeepMind released AlphaGo Master by simply running a ‘Magister’ Go player online which went undefeated?* Everyone knew it was DeepMind simply because who else could it be? And IIRC, didn’t OA also pilot OA5 ‘anonymously’ on DoTA2 ladders? Or how about when Mistral released torrents? (If they had really wanted a blind test, they wouldn’t’ve called it “gpt2”, or they could’ve just rolled it out to a subset of ChatGPT users, who would have no way of knowing the model underneath the interface had been swapped out.)
* One downside of that covert testing: DM AFAIK never released a paper on AG Master, or all the complicated & interesting things they were trying before they hit upon the AlphaZero approach.
I ran out of tokens quickly trying out poetry but I didn’t get the impression that this is a big leap over GPT-4 like GPT-5 presumably is designed to be. (It could, I suppose, be a half-baked GPT-5 similar to ‘Prometheus’ for GPT-4.) My overall impression from poetry was that it was a GPT-4 which isn’t as RLHF-damaged as usual, and more like Claude in having a RLAIF-y creative style. So I could believe it’s a better GPT-4 where they are experimenting with new tuning/personality to reduce the ChatGPT-bureaucratese.
I’m not sure what “margin of error” is. This is just rounding, is it not? It’s not like the website is adding random epsilon numbers to screw with you: it is simply rounding off percentages. They are exact and deterministic up to rounding.
Though the bigger issue is that the number of players can’t strictly be computed based on percentages alone.
Since it’s a non-deterministic problem, you’d represent all possible answers in ascending order as a lazy generator. You can then filter it by any known constraints or requirements (maybe you know players have to be paired, so it’s always an even number of players, and you can filter out all odd values). Since the number of possible valid values will increase greatly as the total N increases, this might get slow and require some sort of sieve. (At a guess, since it’s rounding, the range of possible values presumably increases rapidly, and so it might make more sense to instead return pairs of (lower,upper) bounds?)
Personally, I would first start with the forward problem, since it is so simple. Then I could test any algorithms or tweaks by generating a random N of players and testing that all of the generator values ≤ N are correct.
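A minimal sketch of that forward-then-inverse approach (the function names, and rounding to whole percentages, are my assumptions about the site; note also that Python’s round() is banker’s rounding, while the site may round half up):

```python
from itertools import count

def displayed(counts):
    """Forward problem: the whole-number percentages the site would show."""
    n = sum(counts)
    return tuple(round(100 * c / n) for c in counts)

def candidate_ns(shown, max_n):
    """Lazily yield total player counts N consistent with the shown percentages.

    Only checks the per-choice counts nearest to shown% of N; a fuller
    version would also search neighboring counts.
    """
    for n in count(1):
        if n > max_n:
            return
        counts = [round(p * n / 100) for p in shown]
        if sum(counts) == n and displayed(counts) == shown:
            yield n
```

For example, `list(candidate_ns((75, 25), 10))` yields `[4, 8]`: with two choices shown as 75%/25%, only 4 players (3 vs 1) or 8 players (6 vs 2) are consistent up to that bound.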
This has the benefit that you can also easily do simple Bayesian updates by ABC without the rigmarole of pymc or Stan etc: just draw a sample from the prior over the n players, feed it in, see if you replicate the exact observed %s, and if not, delete the sample; the samples you keep are the new posterior.
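A minimal exact-match rejection sketch of that ABC update (the priors and names are all mine, and I assume a two-choice poll for simplicity): draw (N, split) from the prior, simulate what the site would display, and keep the draw only if it reproduces the observed percentages.

```python
import random

def shown_pcts(counts):
    """Simulate the site's display: whole-number rounded percentages."""
    n = sum(counts)
    return tuple(round(100 * c / n) for c in counts)

def abc_posterior(observed, samples=20_000, max_n=50):
    """ABC by rejection: kept draws of N are samples from the posterior."""
    kept = []
    for _ in range(samples):
        n = random.randint(1, max_n)        # prior over total players
        cut = random.randint(0, n)          # prior over the two-way split
        if shown_pcts((cut, n - cut)) == observed:
            kept.append(n)
    return kept
```

The returned list is the posterior over the number of players, no pymc or Stan required; with an exact match on a discrete display, the rejection step introduces no approximation error beyond the finite sample size.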
I have to say, I still don’t understand the cult of Roam or why people were so impressed by, eg. the [[link]] syntax borrowed from English Wikipedia (which introduced it something like 18 years before, on what is still the most widely-read & edited wiki software in history), which you remark on repeatedly. Even in 2019 in beta it just seemed like a personal wiki, not much different from, say, PmWiki (2002), with some more emphasis than usual on the common backlink or ‘reverse citation’ functionality (which so many hypertext systems had supported going back decades, in parallel with Xanadu ideas). It may be nicer than, say, English Wikipedia’s “WhatLinksHere” (which has been there since before I began using it early in the 2000s), but it is nothing to create a social-media cult over or sell “courses” about (!).

But if the bubble has burst, it’s not hard to see why: any note-taking, personal knowledge management, or personal wiki system is inherently limited by the fact that it requires a lot of work for what is, for most people, little gain. For most people, trying to track all of this stuff is as useful as exact itemized grocery store receipts from 5 years ago.
Most people simply have no need for lots of half-formed ideas, random lists of research papers, and so on. This is what people always miss about Zettelkasten: are you writing a book? Are you a historian or German scholar? Do you publish a dozen papers a year? No? Then why do you think you need a Zettelkasten? If you are going to be pulling out a decent chunk of those references for an essay or something, possibly decades from now, then it can be worth the upfront cost of entering references into your system, knowing that you’ll never use most of them and the benefit is mostly from the long tail, and you will, in the natural course of usage, periodically look over them to foster serendipity & creativity; if you aren’t writing all that, then there’s no long tail, no real benefit, no intrinsic review & serendipity, and it’s just a massive time & energy sink. Eventually, the user abandons it… and their life gets better.
Further, these systems are inherently passive, and force people to become secretaries, typists, reference librarians, archivists, & writers simply to keep it from rotting (quite aside from any mere software issue), to keep it up to date, revise tenses or references, fix spelling errors, deal with link rot, and so on. (Surprisingly, most people do not find that enjoyable.) There is no intelligence in such systems, and they don’t do anything. The user still has to do all the thinking, and it adds on a lot of thinking overhead.
So what comes after Roam and other personal systems which force the user to do all the thinking? I should think that would be obvious: systems which can think for the user instead. LLMs and other contemporary AI are wildly underused in the personal system space right now, and can potentially fix a lot of these issues, through approaches like actively surfacing connections instead of passively waiting for the user to make them on their own and manually record them, and can proactively suggest edits & updates & fixes that the user simply approves in batches. (Think of how much easier it is to copyedit a document using a spellcheck as a series of Y/N semi-automatic edits, than to go through it by eye, fixing typos.)
However, like most such paradigm shifts, it will be hard to tack it onto existing systems. You can’t reap the full benefits of LLMs with some tweaks like ‘let’s embed documents and add a little retrieval pane!’. You need to rethink the entire system and rewrite it from the ground up on the basis of making neural nets do as much as possible, to figure out the new capabilities and design patterns, and what to drop from the old obsolete personal wikis like Roam.
From what it sounds like, the Roam community would never stand for that, and I have a lot of doubts about whether it makes sense economically to try. It seems like if one wanted to do that, it would be better to start with a clean sheet (and an empty cap table).