I have a compute-market startup called vast.ai, and I’m working towards aligned AI. Currently seeking networking, collaborators, and hires—especially top notch cuda/gpu programmers.
My personal blog: https://entersingularity.wordpress.com/
Sorry, explain again why floods of neurotransmitter molecules bopping around are ideally thermodynamically efficient? You’re assuming that they’re trying to do multiplication out to 8-bit precision using analog quantities? Why suppose the 8-bit precision?
I’m not assuming that, but it’s nonetheless useful as a benchmark for comparison. It helps illustrate that 1e5 eV is really not much—it just allows a single 8-bit analog mult, for example.
Earlier in the thread I said:
Now most synapses are probably smaller/cheaper than 8-bit equivalent, but most of the energy cost involved is in pushing data down irreversible dissipative wires (just as true in the brain as in a GPU). Then add in the additional costs of synaptic adjustment machinery for learning, cell maintenance tax, dendritic computation, etc.
The synapse is clearly doing something somewhat more complex than just analog multiplication.
And in terms of communication costs (which are paid at the synaptic junction for the synapse → dendrite → soma path), that 1e5 eV is only enough to carry a reliable 1-bit signal about ~0.1 mm (1e5 nm) through irreversible nano/micro scale wires (the wire bit energy for axons/dendrites and modern CMOS is about the same).
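A quick back-of-envelope check of that distance, assuming the ~1 eV per bit per nm irreversible-wire cost cited here (numbers are illustrative):

```python
# How far can 1e5 eV push one reliable bit, at ~1 eV per bit per nm
# (the rough wire cost for both axons/dendrites and modern CMOS)?
energy_budget_eV = 1e5
wire_cost_eV_per_nm = 1.0   # assumed order-of-magnitude value

distance_nm = energy_budget_eV / wire_cost_eV_per_nm
print(f"{distance_nm:.0e} nm = {distance_nm * 1e-6} mm")   # 1e+05 nm = 0.1 mm
```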
Reversible interconnect is much more complex—it requires communicating through fully isolated particles over the wire distance, which is obviously much more practical with photons for various reasons, but photons are very large, etc. Many complex tradeoffs.
Imprecisely multiplying two analog numbers should not require 10^5 times the minimum bit energy in a well-designed computer.
Much depends on your exact definition of ‘imprecisely’. But if we assume exactly 8-bit equivalent SNR, as I was using above, then you can look up this question in the research literature and/or ask an LLM, and the standard answer is in fact close to ~1e5 eV.
This multiplication op is a masking operation and not inherently reversible so it erases/destroys about 1⁄2 of the energy of the photonic input signal (100% if you multiply by 0, etc). So the min energy boils down to that required to represent an 8-bit number reliably as an analog signal (so for example you could convert a digital 8-bit signal to analog and back to digital losslessly all at the same standard sufficient 1eV reliability).
Analog signals effectively represent numbers as the 1st moment of a binomial distribution over carrier particles, and the information content is basically the entropy of a binomial over 1eV carriers which is ~0.5 log2(N) and thus N ~ 2^(2b) quanta for b bits of precision.
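A minimal sketch of that relation (assuming ~1 eV per carrier quantum, so the quanta count converts directly to an energy estimate):

```python
def quanta_for_analog_bits(b):
    # binomial-entropy relation from above: bits ~ 0.5 * log2(N)  =>  N ~ 2**(2*b)
    return 2 ** (2 * b)

for b in (4, 8):
    n = quanta_for_analog_bits(b)
    # assuming ~1 eV per carrier quantum, N quanta corresponds to ~N eV
    print(f"{b}-bit analog signal: ~{n:,} quanta ≈ {float(n):.0e} eV")
# 8-bit => 65,536 quanta, i.e. on the order of 1e5 eV
```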
The energy to represent an analog signal doesn’t depend much on the medium—whether you are using photons or electrons/ions. The advantage of the electronic medium is the much smaller practical device dimensions possible when using much heavier/denser particles as bit carriers: 1eV photons are micrometer scale, much larger than the smallest synapses/transistors/biodevices. The obvious advantage of photons is their much higher transmission speed: thus they are used for longer range interconnect (but mostly only for distances larger than the brain radius).
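For reference, the ‘micrometer scale’ figure is just the wavelength of a ~1 eV photon (a quick check, using hc ≈ 1240 eV·nm):

```python
# Wavelength of a ~1 eV photon
hc_eV_nm = 1239.84           # Planck constant times c, in eV·nm
energy_eV = 1.0
wavelength_nm = hc_eV_nm / energy_eV
print(f"{wavelength_nm:.0f} nm ≈ {wavelength_nm / 1000:.2f} µm")   # 1240 nm ≈ 1.24 µm
```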
Not really—it’s vector matrix multiplication, not matrix matrix mult.
This is the answer, but
Essentially, the brain is massively underclocked because of design-space restrictions imposed by biology and evolution
The main restriction is power efficiency: the brain provides a great deal of intelligence for a budget of only ~20 watts. Spreading out that power budget over a very wide memory operating at very slow speed just turns out to be the most power efficient design (vs a very small memory running at very high speed), because memory > time.
Would have made much more sense (visually and otherwise) to show graphs in log space. Example: https://www.openphilanthropy.org/research/modeling-the-human-trajectory/
The effectiveness of weight sharing (and parameter compression in general) diminishes as you move the domain from physics (simple rules/patterns tiled over all of space/time) up to language/knowledge (downstream facts/knowledge that are far too costly to rederive from simulation).
BNNs can’t really take advantage of weight sharing so much, so ANNs that are closer to physics should be much smaller parameter-wise, for the same compute and capability. Which is what we observe for lower level sensor/motor modalities.
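To make the weight-sharing point concrete, here is an illustrative parameter count for a single translation-invariant (‘physics-like’) layer versus its unshared, locally connected equivalent; the layer shapes are made up:

```python
# Illustrative only: parameter count with vs. without weight sharing for
# one translation-invariant layer. Shapes are arbitrary.
H = W = 64                  # feature map height/width
c_in, c_out, k = 32, 32, 3  # channels and kernel size

shared   = c_out * c_in * k * k             # conv: one kernel tiled over all positions
unshared = H * W * c_out * c_in * k * k     # locally connected: a separate kernel per position

print(f"shared (conv): {shared:,} params")      # 9,216
print(f"unshared     : {unshared:,} params")    # 37,748,736
print(f"compression  : {unshared // shared}x")  # 4096x (= H*W)
```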
The single prime causative factor driving the explosive growth in AI demand/revenue is and always has been the exponential reduction in $/flop via Moore’s Law, which simply is Jevons paradox manifested. With more compute everything is increasingly easy and obvious; even idiots can create AGI with enough compute.
Abilities/intelligence come almost entirely from pretraining, so all the situation awareness and scheming capability that current (and future similar) frontier models possess is thus also mostly present in the base model.
Yes, but for scheming, we care about whether the AI can self-locate as an AI using its knowledge. The fact that (at a minimum) sampling from the system is required for it to self-locate as an AI might make a big difference here.
So if your ‘yes’ above is agreeing that capabilities—including scheming—come mostly from pretraining, then I don’t see how relevant it is whether or not that ability is actually used/executed much in pretraining, as the models we care about will go through post-training and I doubt you are arguing post-training will reliably remove scheming.
I also think it seems probably very hard to train a system capable of obsoleting top human experts which doesn’t understand that it is an AI even if you’re willing to take a big competitiveness hit.
Indeed but that is entirely the point—by construction!
Conceptually we have a recipe R (arch, algorithms, compute, etc), and a training dataset which we can parameterize by time cutoff T. Our objective (for safety research) is not to train a final agent, but instead to find a safe/good R with minimal capability penalty. All important results we care about vary with R independently of T, but competitiveness/dangerousness does vary strongly with T.
Take the same R but vary the time cutoff T of the training dataset: the dangerousness of the AI will depend heavily on T, but not the relative effectiveness of various configurations of R. That is simply a restatement of the ideal requirements for a safe experimental regime. Models/algos that work well for T of 1950 will also work for T of 2020 etc.
Training processes with varying (apparent) situational awareness
1:2.5 The AI seemingly isn’t aware it is an AI except for a small fraction of training which isn’t where much of the capabilities are coming from. For instance, the system is pretrained on next token prediction, our evidence strongly indicates that the system doesn’t know it is an AI when doing next token prediction (which likely requires being confident that it isn’t internally doing a substantial amount of general-purpose thinking about what to think about), and there is only a small RL process which isn’t where much of the capabilities are coming from.
Abilities/intelligence come almost entirely from pretraining, so all the situation awareness and scheming capability that current (and future similar) frontier models possess is thus also mostly present in the base model. The fact that you need to prompt them to summon out a situationally aware scheming agent doesn’t seem like much of a barrier, and indeed strong frontier base models are so obviously misaligned/jail-breakable/dangerous that releasing them to the public is PR-harmful enough to motivate RLHF post training purely for selfish profit-motives.
> This implies that restricting when AIs become (saliently) aware that they are an AI could be a promising intervention, to the extent this is possible without greatly reducing competitiveness.
Who cares if it greatly reduces competitiveness in experimental training runs?
We need to figure out how to align superhuman models—models trained with > 1e25 efficient flops on the current internet/knowledge, which requires experimental iteration. We probably won’t get multiple iteration attempts for aligning SI ‘in prod’, so we need to iterate in simulation (what you now call ‘model organisms’).
We need to find alignment training methods that work even when the agent has superhuman intelligence/inference. But ‘superhuman’ here is relative—measured against our capabilities. The straightforward easy way to accomplish this is training agents in simulations with much earlier knowledge cutoff dates, which isn’t theoretically hard—it just requires constructing augmented historical training datasets. So you could train on a 10T+ token dataset of human writings/thoughts with cutoff 2010, or 1950, or 1700, etc. These base models wouldn’t be capable of simulating/summoning realistic situationally aware agents, their RL-derived agents wouldn’t be situationally sim-aware either, etc.
Input vs output tokens are both unique per agent history (prompt + output), so that differentiation doesn’t matter for my core argument about the RAM constraint. If you have a model which needs 1TB of KV cache, and you aren’t magically sharing that significantly between instances, then you’ll need at least 1000 * 1TB of RAM to run 1000 inferences in parallel.
The 3x-10x cost ratio model providers charge is an economic observation that tells us something about the current cost vs utility tradeoffs, but it’s much complicated by oversimplification in the current pricing models (they are not currently charging their true costs, probably because that would be too complicated, but also perhaps because it would reveal too much information—their true cost would be more like charging rent on RAM for every timestep). It just tells you, very roughly, that the mean flop utilization (averaged over many customer requests) of the generation phase (parallel over instances) is perhaps 3x to 10x lower than that of the prefill phase (parallel over time)—but it doesn’t directly tell you why.
This is all downstream dependent on model design and economics. There are many useful requests that LLMs can fulfill while using barely any KV cache—essentially all google/oracle type use cases where you are just asking the distilled wisdom of the internet a question. If those were all of the request volume, then the KV cache RAM per instance would be inconsequential, inference batch sizes would be > 1000, inference flop utilization would be the same for prefill vs generation, and providers would charge the same price for input vs output tokens.
On the other extreme, if all requests used up the full training context window, then the flop utilization of inference would be constrained to approximately (max_KV_cache_RAM + weight_RAM) / (max_KV_cache_RAM * alu_ratio). For example, if the KV cache is 10% of RAM and alu_ratio is 1000:1, generation would have a max efficiency of 1%. If prefill efficiency were 30%, then output tokens would presumably be priced 30x higher than input tokens.
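A sketch of that bound with the example numbers (all values illustrative; alu_ratio is flops available per byte of memory bandwidth, roughly 1000:1 on an H100 as noted above):

```python
# Rough sketch of the generation-phase utilization bound described above.
def max_generation_efficiency(kv_cache_ram, weight_ram, alu_ratio):
    # generation must stream weights + KV cache once per token, so achievable
    # flop utilization is bounded by the ratio used in the comment above
    return (kv_cache_ram + weight_ram) / (kv_cache_ram * alu_ratio)

eff = max_generation_efficiency(kv_cache_ram=0.1, weight_ram=0.9, alu_ratio=1000)
print(f"max generation efficiency ≈ {eff:.1%}")   # ≈ 1.0%

prefill_eff = 0.30
print(f"implied output:input price ratio ≈ {prefill_eff / eff:.0f}x")  # ≈ 30x
```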
So the observed input:output token pricing is dependent on the combination of the KV cache RAM fraction (largely a model design decision), the current efficiency of implementations of prefill vs generation, and most importantly—the distribution of request prompt lengths, which itself is dependent on the current economic utility of shorter vs longer prompts for current models.
In practice most current models have a much smaller KV-cache-to-weight RAM ratio than my simple 1:1 example, but the basic point holds: training is more flop & interconnect limited, while inference is more RAM and RAM-bandwidth limited. These constraints already shape the design space of models and how they are deployed.
LLMs currently excel at anything a human knowledge worker can do without any specific training (minimal input prompt length), but largely aren’t yet competitive with human experts at most real world economic tasks that require significant unique per-job training. Coding is a good example—human thoughtspeed is roughly 9 token/s, or 32K/hour, or 256K per 8 hour work day, or roughly 1M tokens per week.
Current GPT4-turbo (one of the current leaders for coding), for example, has a max context length of 128K (roughly 4 hours). But if you actually use all of that for each request, then for typical coding requests that generate say 1K of useful output (equivalent to a few minutes of human thought), that will cost you about $1.25 for the input tokens, but only about $0.03 for the output tokens. That costs about as much as a human worker, per minute of output thought tokens. The cost of any LLM agent today (per minute of output thought) increases linearly with input prompt length—i.e. the agent’s unique differentiating short-term memory. Absent more sophisticated algorithms, the cost of running a ReAct-like LLM agent thus grows quadratically with time, vs linearly for humans (because each small observe-act time step has cost proportional to input context length, which grows per time step).
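A sketch of that cost arithmetic, using the per-token prices implied above ($1.25 per 128K input, $0.03 per 1K output) and the ~9 token/s thought-speed estimate:

```python
# Cost per minute of "output thought" for a full-context request (illustrative)
input_price_per_tok  = 1.25 / 128_000   # implied ~$0.00001 per input token
output_price_per_tok = 0.03 / 1_000     # implied $0.00003 per output token
human_tok_per_s      = 9                # thought-speed estimate from above

context_tokens = 128_000                # full context used for the request
output_tokens  = 1_000                  # a few minutes of "thought"

request_cost = (context_tokens * input_price_per_tok
                + output_tokens * output_price_per_tok)   # ≈ $1.28
output_minutes = output_tokens / human_tok_per_s / 60     # ≈ 1.9 min
print(f"${request_cost:.2f} per request ≈ "
      f"${request_cost / output_minutes:.2f} per minute of output thought")
# ≈ $0.69/min, i.e. roughly human knowledge-worker wages
```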
Human programmers aren’t being replaced en masse (yet) in part because current models aren’t especially smarter than humans at equivalent levels of job-specific knowledge/training.
Normalized for similar ability, LLMs currently are cheaper than humans at almost any knowledge work that requires very little job-specific knowledge/training, and much more expensive than humans for tasks that require extensive job-specific knowledge/training—and this has everything to do with how transformers currently consume and utilize VRAM.
Not for transformers, for which training and inference are fundamentally different.
Transformer training parallelizes over time, but that isn’t feasible for inference. So transformer inference backends have to parallelize over batch/space, just like RNNs, which is enormously less efficient in RAM and RAM bandwidth use.
So if you had a large attention model that uses say 1TB of KV cache (fast weights) and 1TB of slow weights, transformer training can often run near full efficiency, flop limited, parallelizing over time.
But similarly fully efficient transformer inference would require running about K instances/agents in parallel, where K is the flop/mem_bw ratio (currently up to 1000 on an H100). So that would be 1000 * 1TB of RAM for the KV cache (fast weights), since it’s unique per agent instance.
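The RAM arithmetic behind that figure, with the illustrative 1TB/1TB split from above:

```python
# Illustrative numbers only, matching the example above.
weight_ram_tb        = 1.0     # "slow" weights, shared across all instances
kv_cache_per_inst_tb = 1.0     # "fast" weights (KV cache), unique per instance
alu_ratio            = 1000    # flop : memory-bandwidth ratio (~H100)

# keeping the ALUs busy during generation needs ~alu_ratio concurrent instances
instances = alu_ratio
total_ram_tb = weight_ram_tb + instances * kv_cache_per_inst_tb
print(f"{total_ram_tb:,.0f} TB of RAM for {instances} concurrent instances")  # ~1,001 TB
```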
This, in a nutshell, is part of why we don’t already have AGI. Transformers are super efficient at absorbing book knowledge, but just as inefficient as RNNs at inference (generating new experiences, which is a key bottleneck on learning from experience).
Thus there is of course much research into more efficient long-context KV caching, tree/graph inference that can share some of the KV cache across similar branching agents, etc.
Due to practical reasons, the compute requirements for training LLMs are several orders of magnitude larger than what is required for running a single inference instance. In particular, a single NVIDIA H100 GPU can run inference at a throughput of about 2000 tokens/s, while Meta trained Llama3 70B on a GPU cluster[1] of about 24,000 GPUs. Assuming we require a performance of 40 tokens/s, the training cluster can run roughly 1.2 million concurrent instances of the resulting 70B model.
I agree directionally with your headline, but your analysis here assumes flops are the primary constraint on inference scaling. Actually it looks like VRAM is already the more important constraint, and it would likely become even more dominant if AGI requires more brain-like models.
LLMs need VRAM for both ‘static’ and ‘dynamic’ weights. The static weights are the output of the long training process, and are shared over all instances of the same model or fine-tune (LoRAs share most). However the dynamic ‘weights’—in the attention KV cache—are essentially unique to each individual instance of the model, specific to its current working memory context and chain of thought.
So the key parameters here are total model size and dynamic vs static ratio (which depends heavily on context length and many other factors). But for example if dynamic is 50% of the RAM usage then 1M concurrent instances would require almost as many GPUs.
If AGI requires scaling up to very large brain-size models of ~100T params (which seems likely), and the dynamic ratio is even just 1%, then 1M concurrent instances would require on the order of 10M GPUs.
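A sketch of that estimate; bytes per param and per-GPU RAM are my added assumptions (1 byte/param, ~100GB per GPU), the rest follows the numbers above:

```python
# Rough GPU count for 1M concurrent instances of a ~100T-param model.
params          = 100e12    # ~100T params (brain-scale)
bytes_per_param = 1         # assumed
dynamic_ratio   = 0.01      # dynamic (per-instance) share of model RAM
instances       = 1e6
gpu_ram_bytes   = 100e9     # assumed ~100 GB per GPU

static_bytes  = params * bytes_per_param                  # shared, ~100 TB
dynamic_bytes = static_bytes * dynamic_ratio * instances  # unique per instance
gpus_needed   = (static_bytes + dynamic_bytes) / gpu_ram_bytes
print(f"~{gpus_needed:.0e} GPUs")   # ~1e+07, i.e. on the order of 10M GPUs
```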
How is that even remotely relevant? Humans and AIs learn the same way, via language—and it’s not like this learning process fails just because language undersamples thoughts.
As the article points out, shared biological needs do not much deter the bear or chimpanzee from killing you. An AI could be perfectly human—the very opposite of alien—and far more dangerous than Hitler or Dahmer.
The article is well written but dangerously wrong in its core point. AI will be far more human than alien. But alignment/altruism is mostly orthogonal to human vs alien.
We are definitely not training AIs on human thoughts because language is an expression of thought, not thought itself.
Even if training on language were not equivalent to training on thoughts, that would also apply to humans.
But it also seems false in the same way that “we are definitely not training AIs on reality because image files are compressed sampled expressions of images, not reality itself” is false.
Approximate Bayesian inference (i.e. DL) can infer the structure of a function through its outputs; the structure of the 3D world through images; and thoughts through language.
Premise 1: AGIs would be like a second advanced species on earth, more powerful than humans.
Distinct alien species arise only from distinct, separated evolutionary histories. Your example of the aliens from Arrival is indeed a good (hypothetical) one: truly alien minds resulting from a completely independent evolutionary history on an alien world. Any commonalities between us and them would be solely the result of convergent evolution. They would have completely different languages, cultures, etc.
AI is not alien at all, as we literally train AI on human thoughts. As a result we constrain our AI systems profoundly, creating them in our mental image. Any AGI we create will inevitably be far closer to human uploads than alien minds. This is a prediction Moravec made as early as 1988 (Mind Children), now largely fulfilled by the strong circuit convergence/correspondence between modern AI and brains.
Minds are software mental constructs, and alien minds would require alien culture. Instead we are simply creating new hardware for our existing (cultural) mind software.
I’m also not sure of the relevance and am not following the thread fully, but the summary of that experiment is that it takes some time (measured in nights of sleep, which are the rough equivalent of big-batch training updates) for the newly sighted to develop vision, but less time than infants—presumably because the newly sighted already have fully functioning sensory inference world models in another modality that can speed up learning through dense top-down priors.
But it’s way, way more than “grok it really fast with just a few examples”—training their new visual systems still takes non-trivial training data and time.
I suspect that much of the appeal of shard theory is working through detailed explanations of model-free RL with general value function approximation for people who mostly think of AI in terms of planning/search/consequentialism.
But if you already come from a model-free RL value approx perspective, shard theory seems more natural.
Moment to moment decisions are made based on value-function bids, with little to no direct connection to reward or terminal values. The ‘shards’ are just what learned value-function approximating subcircuits look like in gory detail.
The brain may have a prior towards planning subcircuitry, but even without a strong prior, planning submodules will eventually emerge naturally in a model-free RL learning machine of sufficient scale (there is no fundamental difference between model-free and model-based for universal learners). TD-like updates ensure that the value function extends over longer timescales as training progresses. (And in general humans seem to plan on timescales which scale with their lifespan, as you’d expect.)
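For readers coming from the planning/search frame, here is a minimal textbook-style sketch of what “decisions from value-function bids” means mechanically; this is just generic TD/Q-learning with linear value approximation, not anything specific to shard theory or the brain:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_actions = 8, 3
w = np.zeros((n_actions, n_features))   # learned value-function weights

def act(features):
    # moment-to-moment choice is a bid from the learned value approximator,
    # with no direct reference to reward or terminal values
    return int(np.argmax(w @ features))

def td_update(features, action, reward, next_features, alpha=0.1, gamma=0.99):
    # bootstrapped TD target: reward plus discounted value of the next state
    target = reward + gamma * np.max(w @ next_features)
    w[action] += alpha * (target - w[action] @ features) * features

# toy rollout with random features/rewards, just to show the update mechanics
phi = rng.random(n_features)
for _ in range(100):
    a = act(phi)
    phi_next, r = rng.random(n_features), rng.random()
    td_update(phi, a, r, phi_next)
    phi = phi_next
```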
TSMC 4N is a little over 1e10 transistors/cm^2 for GPUs and roughly 5e-18 J switch energy assuming dense activity (little dark silicon). The practical transistor density limit with minimal few-electron transistors is somewhere around ~5e11 trans/cm^2, but the minimal viable high speed switching energy is around ~2e-18 J. So there is another 1 to 2 OOM of further density scaling, but less room for further switching energy reduction. Thus scaling past this point increasingly involves dark silicon or complex expensive cooling, and thus diminishing returns either way.
Achieving 1e-15 J/flop seems doable now for low precision flops (fp4, perhaps fp8 with some tricks/tradeoffs); most of the cost is data movement as pulling even a single bit from RAM just 1 cm away costs around 1e-12J.
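That data-movement figure follows directly from the ~1 eV/nm irreversible-wire cost used elsewhere in this thread:

```python
# Energy to move one bit 1 cm at ~1 eV per bit per nm
eV_to_J        = 1.602e-19
cost_eV_per_nm = 1.0        # assumed irreversible-wire cost
distance_nm    = 1e7        # 1 cm expressed in nm

energy_J = cost_eV_per_nm * distance_nm * eV_to_J
print(f"~{energy_J:.1e} J per bit moved 1 cm")   # ~1.6e-12 J, i.e. ~1e-12 J
```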
No—coax cables are enormous in radius (EM wavelengths), and do not achieve much better than 1 eV/nm in practice. In the same waveguide radius you can just remove the copper filler, go pure optical, and get significantly below 1 eV/nm anyway—so why even mention coax?
The only thing that was ‘debunked’ was in a tangent conversation that had no bearing on the main point (about nanoscale wire interconnect smaller than EM wavelength—which is irreversible and consumes close to 1 eV/nm in both brains and computers), and it was just my initial conception that coax cables could be modeled in simplification as relays like RC interconnect.
There are many complex tradeoffs between size, speed, energy, etc. Reversible and irreversible comms occupy different regions of that Pareto surface. Reversible communication is isomorphic to transmitting particles—in practice always photons—and requires complex/large transmitters/receivers and photon-sized waveguides, etc. Irreversible communication is isomorphic to domino-based computing; it has the advantage—and cost—of full error correction/erasure at every cycle, and is easier to guide down narrow and complex paths.