The current major AGI labs are led by believers. My understanding is that quite a few (all?) of them bought into the initial LW-style AGI Risk concerns, and founded these labs as a galaxy-brained plan to prevent extinction and solve alignment. Crucially, they aimed to do that well before the talk of AGI became mainstream. They did it back in the days when “AGI” was a taboo topic due to the AI field experiencing one too many AI winters.
So with this model, you think that the entire staff of all the AI labs who have direct experience on the models, less than ~5000 people at top labs, is all who mattered? So if, in your world model, they were all murdered, or somehow convinced that what they were doing for lavish compensation was wrong, then that would be it: no AGI for decades.
What you are advancing is a variant on the ‘lone innovator’ idea. That if you were to go back in time and murder the ‘inventor’ or developer of a critical innovation, it would make a meaningful difference as to the timeline when the innovation is finally developed.
And it’s falsifiable: if, for each major innovation credited to one person, you were to research the history and learn that dozens of other people were just a little late to publish essentially the same idea, developed independently, the theory would be wrong, correct?
And this would extend to AI.
One reason the lone innovator model could be false is if invention doesn’t really come from inspiration, but from the baseline level of technology (or, in some cases, math or unexplained scientific data) reaching the point where the innovation becomes possible.
For every innovation I have ever looked at, the lone innovator model is false. There were other people, and if you went back in time and could kill just one person, it wouldn’t have made any meaningful difference. One famous example is the telephone, where Bell just happened to get to the patent office first. https://en.wikipedia.org/wiki/Invention_of_the_telephone
Einstein has some of the strongest hype as a lone contributor ever given a human being, yet: https://www.caltech.edu/about/news/not-lone-genius
I think AI is specifically a strong case of the lone innovator theory being false because what made it possible was large parallel CPU and later GPU clusters, where the most impressive results have required the most powerful compute that can be built. In a sense all present AI progress happened the earliest it possibly could have at investment levels of ~1 billion USD/model.
That’s the main thing working against your idea. There are also many other people outside the mainstream AI labs who want to work on AI so badly that they replicated the models and formed their own groups, Eleuther being one. They are also believers, just not elite enough to make the headcount at the elite labs.
There could be a very large number of people like that. And of course the actual enablers of modern AI are chip companies such as Nvidia and AMD. They aren’t believers in AI... but they are believers in money (to be fair to your theory, Nvidia was an early believer in AI... to make money). At present they have a strong incentive to sell as many ICs as they can, to as many customers as will buy them. https://www.reuters.com/technology/us-talks-with-nvidia-about-ai-chip-sales-china-raimondo-2023-12-11/
I think that if, hypothetically, you wanted to halt AI progress, the crosshairs have to be on the hardware supply. As long as ever more hardware is being built and shipped, more true believers will be created, same as every other innovation. If you had assassinated every person who ever worked on a telephone-like analog acoustic device in history, but didn’t do anything about the supply of DC batteries, amplifiers, audio transducers, and wire, someone is going to go ‘hmm’ and invent the device.
As long as more and more GPUs are being shipped, and open source models, same idea.
There’s another prediction that comes out of this. Once the production rate is high enough, pausing or slowing down AI in any way probably disappears as a viable option. For example, it’s logical for Nvidia to pay more to get ICs faster: https://www.tomshardware.com/news/nvidia-alleged-super-hot-run-chinese-tsmc-ai-gpu-chips . Once TSMC’s capacity is all turned over to building AI hardware, and in 2024 another 2 million H100s are built, and MI300X starts to ship in volume... there’s some level at which it’s impossible to even consider an AI pause because too many chips are out there. I’m not sure where that level is, just that production ramps should be rapid until the world runs out of fab capacity to reallocate.
You couldn’t plan to control nuclear weapons if every corner drug store was selling weapons grade plutonium in 100g packs. It would be too late.
For a quick idea: to hit the 10^26 “maybe dangerous” threshold in fp32, you would need approximately 2 million cards to do it in a month. Essentially, in 2024 Nvidia will sell to anyone who can afford it enough cards for 12 “maybe dangerous” models to be trained in 2025. AMD will need time for customers to figure out their software stack, so it probably won’t be used as a training accelerator much in 2024.
So “uncontrollable” is some multiplier on this. We can then predict what year AI will no longer be controllable by restricting hardware.
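As a back-of-the-envelope check on that card count (both inputs are assumptions: roughly 67 teraFLOP/s fp32 vector peak per H100, and the ~30% sustained utilization mentioned downthread):

```python
# Rough sanity check of the "~2 million cards" figure.
# Assumptions: H100 fp32 (non-tensor) peak ~67 TFLOP/s; ~30% sustained
# utilization; 30 days of wall clock.
THRESHOLD_FLOPS = 1e26            # the "maybe dangerous" training-compute line
H100_FP32_PEAK = 67e12            # FLOP/s
UTILIZATION = 0.30
MONTH_SECONDS = 30 * 24 * 3600

flops_per_card = H100_FP32_PEAK * UTILIZATION * MONTH_SECONDS
print(f"{THRESHOLD_FLOPS / flops_per_card:,.0f} H100s")   # ~1.9 million
```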
They aren’t believers in AI... but they are believers in money (to be fair to your theory, Nvidia was an early believer in AI... to make money).
Nvidia wasn’t really an early believer unless you define ‘early’ so generously as to be more or less meaningless, like ‘anyone into DL before AlphaGo’. Your /r/ML link actually inadvertently demonstrates that: distributing a (note the singular in both comments) K40 (released 2013, ~$5k and rapidly declining) here or there as late as ~2014 is not a major investment or what it looks like when a large company is an ‘early believer’. The recent New Yorker profile of Huang covers this and Huang’s admission that he blew it on seeing DL coming and waited a long time before deciding to make it a priority of Nvidia’s—in 2009, they wouldn’t even give Geoff Hinton a single GPU when he asked after a major paper, and their CUDA was never intended for neural networks in the slightest.
And even now, they seem to be surprisingly reluctant to make major commitments to TSMC to ensure a big rampup of B100s and later. As I understand it, TSMC is extremely risk-averse and won’t expand as much as it could unless customers underwrite it in advance so that they can’t lose, and still thinks that AI is some sort of fad like cryptocurrencies that will go bust soon; this makes sense because that sort of deeply-hardwired conservatism is what it takes to survive the semiconductor boom-bust and gambler’s ruin and be one of the last chip fabs left standing. And why Nvidia won’t make those commitments may be Huang’s own conservatism from Nvidia’s early struggles, strikingly depicted in the profile. This sort of corporate DNA may add delay you wouldn’t anticipate from looking at how much money is on the table. I suspect that the ‘all TSMC’s capacity is turned over to AI’ point may take longer than people expect due to their stubbornness. (Which will contribute to the ‘future is already here, just unevenly distributed’ gradient between AI labs and global economy—you will have difficulty deploying your trained models at economical scale.)
And even now, they seem to be surprisingly reluctant to make major commitments to TSMC to ensure a big rampup of B100s and later. As I understand it, TSMC is extremely risk-averse and won’t expand as much as it could unless customers underwrite it in advance so that they can’t lose, and still thinks that AI is some sort of fad like cryptocurrencies that will go bust soon; this makes sense because that sort of deeply-hardwired conservatism is what it takes to survive the semiconductor boom-bust and gambler’s ruin and be one of the last chip fabs left standing.
Giving away free hardware vs. being extremely risk-averse seems mildly contradictory, but I will assume you mean in actual magnitudes. Paying TSMC to drop everything and make only B100s is, yeah, a big gamble they probably won’t make since it would cost billions, while a few free cards is nothing.
So that will slow the ramp down a little bit? Would it have mattered? 2012-era compute would be ~16 times slower per dollar, or more if we factor in missing optimizations; the transformer hadn’t been invented, so less efficient networks would be used, etc.
The “it could just be another crypto bubble” take is an understandable conclusion. Remember, GPT-4 requires a small fee to even use, and many of the kind of senior people who work at chip companies haven’t even tried it.
You have seen the below, right? To me this looks like a pretty clear signal as to what the market wants regarding AI...
So with this model, you think that the entire staff of all the AI labs who have direct experience on the models, less than ~5000 people at top labs, is all who mattered?
Nope, I think Sam Altman and Elon Musk are the only ones who matter.
Less facetiously: The relevant reference class isn’t “people inventing lightbulbs in a basement”, it’s “major engineering efforts such as the Manhattan Project”. It isn’t about talent, it’s about company vision. It’s about there being large, well-funded groups of people organized around a determined push to develop a particular technology, even if it’s yet out-of-reach for conventional development or lone innovators.
Which requires not only a ton of talented people, not only the leadership with a vision, not only billions of funding, and not only the leadership capable of organizing a ton of people around a pie-in-the-sky idea and attracting billions of funding for it, but all of these things in the same place.
And in this analogy, it doesn’t look like anyone outside the US has really realized yet that nuclear weapons are a thing that is possible. So if they shut the Manhattan Project down, it may be ages before anyone else stumbles upon the idea.
And AGI is dis-analogous to nuclear weapons, in that the LW-style apocalyptic vision is actually much harder to independently invent and take seriously. We don’t have equations proving that it’s possible.
I think that if, hypothetically, you wanted to halt AI progress, the crosshairs have to be on the hardware supply
Targeting that chokepoint would have more robust effects, yes.
Which requires not only a ton of talented people, not only the leadership with a vision, not only billions of funding, and not only the leadership capable of organizing a ton of people around a pie-in-the-sky idea and attracting billions of funding for it, but all of these things in the same place.
You’re not wrong about any of this. I’m saying that something like AI has a very clear and large payoff. Many people can see this. Essentially, what people worried about AI are saying is that the payoff might be too large, something that hasn’t actually happened in human history yet. (This is why, to older and more conservative people, it feels incredibly unlikely: it seems far more likely that AI will underwhelm, so there’s no danger.)
So there’s motive, but what about the means? If an innovation was always going to happen, then here’s why:
Why did Tesla succeed? A shit ton of work. What would have happened if someone had assassinated the real founders of Tesla, and Musk, in the past?
https://www.energy.gov/eere/vehicles/articles/fotw-1272-january-9-2023-electric-vehicle-battery-pack-costs-2022-are-nearly
Competitive BEVs became possible because of this chart of falling battery-pack costs. The Model S was released in 2012 and hit mass-production numbers in 2015. And obviously there’s an equivalent chart for compute, though it’s missing all the years that matter and it’s not scaled by AI TOPS:
See, as long as this is true, it’s kinda inevitable that some kind of AI will be found.
It might underwhelm for the simple reason that the really high end AIs take too much hardware to find and run. To train a 10^26 FLOPs model with 166k H100s over 30 days costs $4.1 billion in GPUs alone. Actual ASI might require orders of magnitude more to train, and more orders of magnitude in failed architectures that have to be tried to find the ones that scale to ASI.
To train a 10^26 FLOPs model with 166k H100s over 30 days costs $4.1 billion in GPUs alone.
That assumes 230 teraFLOP/s of utilization, so possibly TF32? This assumption doesn’t seem future-proof, even direct use of BF16 might not be in fashion for long. And you don’t need to buy the GPUs outright, or be done in a month.
As an anchor, Mosaic trained a 2.4e23 model in a way that’s projected to require 512 H100s for 11.6 days at precision BF16. So they extracted on average about 460 teraFLOP/s of BF16 utilization out of each H100, about 25% of the quoted 2000 teraFLOP/s. This predicts 150 days for 1e26 FLOPs on 15000 H100s, though utilization will probably suffer. The cost is $300-$700 million if we assume $2-$5 per H100-hour, which is not just GPUs. (If an H100 serves for 3 years at $2/hour, the total revenue is $50K, which is in the ballpark.)
To train a 1e28 model, the estimate anchored on Mosaic’s model asks for 650K H100s for a year. The Gemini report says they used multiple datacenters, so there is no need to cram everything into one building. The cost might be $15-$30 billion. One bottleneck is inference (it could be a 10T parameter dense transformer), as you can’t plan to serve such a model at scale without knowing it’s going to be competent enough to nonetheless be worthwhile, or with reasonable latency. Though the latter probably won’t hold for long either. Another is schlep in making the scale work without $30 billion mistakes, which is cheaper and faster to figure out by iterating at smaller scales. Unless models stop getting smarter with further scale, we’ll get there in a few years.
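A small script to redo these anchors (Mosaic’s projected run is taken as given; assuming its utilization carries over to much larger clusters, which is optimistic, the numbers land near the estimates above):

```python
# Redo the anchors: Mosaic's projected run taken as given, larger runs
# extrapolated assuming the same per-GPU utilization holds (optimistic).
MOSAIC_FLOPS, MOSAIC_GPUS, MOSAIC_DAYS = 2.4e23, 512, 11.6

util = MOSAIC_FLOPS / (MOSAIC_GPUS * MOSAIC_DAYS * 86400)   # BF16 FLOP/s per H100
print(f"~{util / 1e12:.0f} TFLOP/s sustained, {util / 2000e12:.0%} of peak")

def train_days(total_flops, n_gpus):
    """Wall-clock days at Mosaic-level utilization."""
    return total_flops / (n_gpus * util * 86400)

print(f"1e26 FLOPs on 15,000 H100s:  ~{train_days(1e26, 15_000):.0f} days")
print(f"1e28 FLOPs on 650,000 H100s: ~{train_days(1e28, 650_000):.0f} days")
```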
I personally work as a generalist on inference stacks, and I have friends at the top labs. What I understand is that during training you need high numerical precision or you are losing information. This is why you use fp32 or tf32, and I was assuming 30 percent utilization because there are other bottlenecks in current-generation hardware for LLM training (memory bandwidth being one).
If you can make training work for “AGI”-level models with less numerical precision, absolutely this helps. Your gradients are numerically less stable in deeper networks the fewer bits you use, though.
For inference the obvious way to do that is irregular sparsity, which will be much more efficient. This is relevant to AI safety because models so large that they only run at non-negligible speed on ASICs supporting irregular sparsity will be trapped: escape would not be possible so long as the only hardware able to support the model exists in a few centralized data centers.
Irregular sparsity would also need different training hardware; you would obviously start less than fully connected and add or prune connections during phases of training.
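To make “irregular sparsity” concrete, here’s a minimal sketch in Python/SciPy: unstructured pruning keeps ~10% of the weights, so the sparse matvec does ~10% of the multiply-accumulates, but through an irregular gather pattern that dense-matmul units handle poorly and a sparsity-aware ASIC could handle natively. (Purely illustrative, not a performance claim.)

```python
import numpy as np
from scipy.sparse import random as sparse_random

# Unstructured ("irregular") sparsity at inference: 90% of weights pruned.
n = 4096
rng = np.random.default_rng(0)
dense_w = rng.standard_normal((n, n)).astype(np.float32)
sparse_w = sparse_random(n, n, density=0.10, format="csr",
                         dtype=np.float32, random_state=0)
x = rng.standard_normal(n).astype(np.float32)

y_dense = dense_w @ x     # n*n MACs regardless of zeros
y_sparse = sparse_w @ x   # ~0.1 * n*n MACs, index-chasing memory access
print(f"fraction of weights touched: {sparse_w.nnz / n**2:.2f}")
```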
This is probably more fiddly at larger scales, but the striking specific thing to point to is the loss measurements in this Oct 2023 paper (see page 7), which claims
MXFP6 provides the first demonstration of training generative language models to parity with FP32 using 6-bit weights, activations, and gradients with no modification to the training recipe.
and
Our results demonstrate that generative language models can be trained with MXFP4 weights and MXFP6 activations and gradients incurring only a minor penalty in the model loss.
As loss doesn’t just manageably deteriorate, but essentially doesn’t change for 1.5b parameter models (when going from FP32 to MXFP6_E3M2 during pre-training), there is probably a way to keep that working at larger scales. This is about Microscaling datatypes, not straightforward lower precision.
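For intuition, here’s a toy emulation of the Microscaling structure: each small block of weights shares one power-of-two scale and the elements are stored in a few bits. (The integer grid below is an assumption for illustration; actual MXFP6_E3M2 stores 6-bit floating-point elements with their own exponent and mantissa bits, which this glosses over.)

```python
import numpy as np

def mx_style_quantize(w, block=32, elem_bits=6):
    """Toy Microscaling emulation: each block of `block` weights shares one
    power-of-two scale; elements are rounded to a symmetric low-bit grid.
    (Real MXFP6 stores E3M2 floating-point elements, not integers; this only
    sketches the shared-scale structure.)"""
    w = w.reshape(-1, block)
    qmax = 2 ** (elem_bits - 1) - 1
    # smallest power-of-two scale such that max|w| in the block fits the grid
    scale = 2.0 ** np.ceil(np.log2(np.abs(w).max(axis=1, keepdims=True) / qmax + 1e-30))
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return (q * scale).ravel()          # dequantized, for error measurement

w = np.random.default_rng(0).standard_normal(1 << 16)
print(f"mean abs error: {np.abs(mx_style_quantize(w) - w).mean():.4f}")
```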
Skimming the paper and the method: yeah, this should work at any scale. This type of mixed-precision algorithm has a cost, though. You still need hardware support for the higher precisions, which costs you chip area and complexity, as you need to multiply by the block scaling factor and use an accumulator to hold the 32-bit product.
On paper Nvidia claims their tf32 is only 4 times slower than int8, so the gains from this algorithm are small on that hardware, because you have more total weight data with microscaling or other similar methods.
For inference accelerators, being able to drop fp32 entirely is where the payoff really is; that’s a huge chunk of silicon you can leave out of the design. Mixed precision helps there. A 3x or so benefit is also large at inference time because it’s 3x less cost; Microsoft would make money from Copilot if they could reduce their costs 3x.
Oh, but also less VRAM consumption, which is the limiting factor for LLMs. And... less network traffic between nodes in large training clusters. A flat ~3x boost to the largest model you can train?
Right, hence the point about future-proofing the FLOP/s estimate: it’s a potential architecture improvement that’s not bottlenecked by the slower low-level hardware improvement. If you bake quantization in during pre-training, the model grows up adapted to it and there is no quality loss caused by quantizing after pre-training. It’s 5-8 times less memory, and with hardware acceleration (while not designing the chip to do anything else) probably proportionately faster.
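Back-of-the-envelope on that memory claim (the parameter count is illustrative, not any particular model; the per-block scale overhead is assumed at 8 bits per 32 elements):

```python
def weight_vram_gb(n_params, bits_per_weight, block=32, scale_bits=0):
    """Weight storage only (no activations or KV cache); scale_bits adds the
    per-block shared-scale overhead that Microscaling formats carry."""
    total_bits = n_params * bits_per_weight + (n_params / block) * scale_bits
    return total_bits / 8 / 1e9

N = 70e9   # an illustrative 70B-parameter model, not any specific one
for name, bits, scales in [("FP32", 32, 0), ("BF16", 16, 0),
                           ("MXFP6", 6, 8), ("MXFP4", 4, 8)]:
    print(f"{name:>5}: {weight_vram_gb(N, bits, scale_bits=scales):6.1f} GB")
# FP32 ~280 GB down to MXFP4 ~37 GB: the quoted 5-8x range vs full precision
```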
Taken together with papers like this one, it suggests that there is further potential if we don’t try to closely reproduce linear algebra, which might work well if pre-training adapts the model from the start to the particular weird way of computing things (and if an optimizer can cope). For fitting larger models into fewer nodes for sane latency during inference there’s this (which suggests a GPT-4 scale transformer might run on 1-2 H200s; though at scale it seems more relevant for 1e28 FLOPs models, if I’m correct in guessing this kills throughput). These are the kinds of things that, given the current development boom, will have a mainstream equivalent in 1-3 years if they work at all, which they seem to.
So I think there is something interesting from this discussion. Those who worry the most about AI doom, Max H being one, have posited that an optimal AGI could run on a 2070.
Yet the biggest benefits come, like you say, from custom hardware intended to accelerate the optimization. I mentioned arbitrary sparsity, where each layer of the network can consume an arbitrary number of neural tiles and there are many on-chip subnets to allow a large possibility space of architectures. And you mentioned training accelerators designed for mixed-precision training.
Turing complete or not, the wrong hardware will be thousands of times slower. It could be in the next few years that newer models only run on new silicon, obsoleting all existing silicon, and this could happen several times. It obviously did for GPU graphics.
This would therefore be a way to regulate AI. If you could simply guarantee that the current generation of AI hardware was concentrated into known facilities worldwide, nowhere else (robots would use special edge cards with ICs missing the network controllers for clustering), risks would probably be much lower.
This helps to regulate AI that doesn’t pose an existential threat. AGIs might have time to consolidate their cognitive advantages while running on their original hardware optimized for AI. This would quickly (on a human timescale) give them theory and engineering plans from the distant future (of a counterfactual human civilization without AGIs). Or alternatively, superintelligence might be feasible without doing such work at human level, getting there faster.
In that situation, I wouldn’t rule out running an AGI on a 2070, it’s just not a human-designed AGI, but one designed either by superintelligence or by an ancient civilization of human-adjacent level AGIs (in the sense of serial depth of their technological culture). You might need 1e29 FLOPs of AI-optimized hardware to make a scaffolded LLM that works as a weirdly disabled AGI barely good enough to invent its way into removing its cognitive limitations. You don’t need that for a human level AGI designed from the position of thorough comprehension of how AGIs work.
So what you are saying here could be equivalent to “limitless optimization is possible”. Meaning that given a particular computational problem, it is possible to find an equivalent algorithm to solve that problem that is thousands of times faster or more. Note how the papers you linked don’t show that: the solution trades off complexity for somewhere in the range of 3-6 times fewer weights and correspondingly more speed.
You can assume more aggressive optimizations may require more and more tradeoffs and silicon support that older GPUs won’t have. (Older GPUs lacked any fp16-or-lower matrix support.)
This is the usual situation in all engineering: most optimizations come at a cost, frequently complexity. And there exists an absolute limit. Compare Newcomen’s first steam engine to the theoretical limit for a steam engine. First engine was 0.5 percent efficient. Modern engines reach 91 percent. So roughly 2 orders of magnitude were possible. The steam engine from the far future can be at most 100 percent efficient.
For neural networks, a sparse neural network has the same time complexity, O(n^2), as the dense version. So if you think 10 times sparsity will work, and if you think you can optimize further with lower-precision math, that’s about 2 orders of magnitude of optimization.
How much better can you do? If you need 800 H100s to host a human-brain-equivalent machine (for the VRAM) with some optimization, is there really enough room left to find an equivalent function, even from the far future, that will fit on a 2070?
Note this would be from 3,166,400 INT8 TOPS (for 800 H100s) down to ~60 TOPS, or roughly 53,000 times optimization. For VRAM, since that’s the limiting factor, it would be an 8,000 times reduction.
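Making the ratio arithmetic explicit (accelerator specs assumed: H100 SXM at ~3,958 sparse INT8 TOPS and 80 GB HBM; RTX 2070 at ~60 TOPS and 8 GB):

```python
# Ratio arithmetic for the 800-H100 cluster vs a single RTX 2070.
H100_TOPS, H100_GB = 3958, 80      # assumed sparse INT8 peak and HBM size
R2070_TOPS, R2070_GB = 60, 8       # assumed tensor throughput and VRAM
N_H100 = 800

print(f"compute gap: {N_H100 * H100_TOPS / R2070_TOPS:,.0f}x")  # ~53,000x
print(f"memory gap:  {N_H100 * H100_GB / R2070_GB:,.0f}x")      # 8,000x
```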
I can’t prove that algorithms from the cognitive science of the far future can’t work under these constraints. It seems unlikely, though; probably there are minimum computational equivalents for implementing a human-grade robotics model, which need either more compute and memory or specialized hardware that does less work.
Even 800 H100s may be unable to run a human-grade robotics model due to latency; you may require specialized hardware for this.
What can we, the humans, do about this? “Just stop building amazing AI tools because AI will find optimizations from the far future and infect all the computers” isn’t a very convincing argument without demoing it.
So what you are saying here could be equivalent to “limitless optimization is possible”. Meaning that given a particular computational problem, it is possible to find an equivalent algorithm to solve that problem that is thousands of times faster or more.
AGI is not a specific computation that needs to be optimized to run faster without functional change, it’s a vaguely defined level of competence. There are undoubtedly multiple fundamentally different ways of building AGI that won’t be naturally obtained from each other by incrementally optimizing performance, you’d need more theory to even find them.
So what I’m saying is that there is a bounded but still large gap between how we manage to build AGI for the first time, in a rush and without a theoretical foundation that describes how to do it properly, and how an ancient civilization of at least somewhat smarter-than-human AGIs can do it while thinking for subjective human-equivalent centuries of serial time about both the general theory and the particular problem of creating a bespoke AGI for an RTX 2070.
Note how the papers you linked don’t show that: the solution trades off complexity for somewhere in the range of 3-6 times fewer weights and correspondingly more speed.
There can’t be modern papers that describe specifically how the deep technological culture invented by AGIs does such things, so of course the papers I listed are not about that, they are relevant to the thing below the large gap, of how humans might build the first AGIs.
And there exists an absolute limit.
Yes. I’m betting humans won’t come close to it on first try, therefore significant optimization beyond that first try would be possible (including through fundamental change of approach indicated by novel basic theory), creating an overhang ripe for exploitation by algorithmic improvements invented by first AGIs. Also, creating theory and software faster than hardware can be built or modified changes priorities. There are things humans would just build custom hardware for because it’s much cheaper and faster than attempting optimization through theory and software.
First engine was 0.5 percent efficient. Modern engines reach 91 percent. So roughly 2 orders of magnitude were possible. The steam engine from the far future can be at most 100 percent efficient.
We can say that with our theory of conservation of energy, but there is currently no such theory for what it takes to obtain an AGI-worthy level of competence in a system. So I have wide uncertainty about the number of OOMs in possible improvement between the first try and theoretical perfection.
I can’t prove that algorithms from the cognitive science of the far future can’t work under these constraints. It seems unlikely, though; probably there are minimum computational equivalents for implementing a human-grade robotics model, which need either more compute and memory or specialized hardware that does less work.
Even 800 H100s may be unable to run a human-grade robotics model due to latency; you may require specialized hardware for this.
Existential risk is about cognitive competence, not robotics. Subjectively, GPT-4 seems smart enough to be a human level AGI if it was built correctly and could learn. One of the papers I linked is about running something of GPT-4’s scale on a single H200 (possibly only a few instances, since I’m guessing this doesn’t compress activations and a large model has a lot of activations). A GPT-4 shaped model massively overtrained on an outrageous quantity of impossibly high quality synthetic data will be more competent than actual GPT-4, so it can probably be significantly smaller while maintaining similar competence. RAG and LoRA fine-tuning give hopelessly broken and weirdly disabled online learning that can run cheaply. If all this is somehow fixed, which I don’t expect to happen very soon through human effort, it doesn’t seem outlandish for the resulting system to become an AGI (in the sense of being capable of pursuing open ended technological progress and managing its affairs).
Another anchor I like is how 50M parameter models play good Go, being 4 OOMs smaller than 400B LLMs that are broken approximations of the behavior of humans who play Go similarly well. And playing Go is a more well-defined level of competence than being an AGI, probably with fewer opportunities for cleverly sidestepping computational difficulties by doing something different.
Dangerous AGI means all strategically relevant human capabilities, which means robotic tool use. It may not be physically possible on a 2070 due to latency. (The latency is from the time to process images, convert them to a 3D environment, reason over the environment with a robotics policy, and choose the next action. Simply cheating with a lidar might be enough to make this work, of course.)
With that said, I see your point about Go, and there is a fairly obvious way to build an “AGI”-like system, for anything not latency-bound, that might fit in so little compute. It would need a lot of SSD space and would have specialized cached models for all known human capabilities. Give the “AGI” a test and it loads the capabilities needed to solve the test into memory. It has capabilities at different levels of fidelity, and possibly selects an architecture topology on the fly to solve the task.
Since the 2070 has only 8 GB of memory and modern SSDs hit several gigabytes a second, the load time vs. humans wouldn’t be significant.
Note that this optimization, like all engineering tradeoffs, costs something: in this case, huge amounts of disk space. It might need hundreds of terabytes to hold all the models. But maybe there is a compression.
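A hypothetical sketch of that caching scheme (all names and sizes are made up for illustration; `load_weights` stands in for whatever would actually page weights off the SSD):

```python
from collections import OrderedDict

class CapabilityCache:
    """Hypothetical sketch: task-specific expert models live on SSD and get
    paged into a small GPU memory budget on demand, evicting the least
    recently used capability. At ~5 GB/s, a 6 GB expert loads in about a
    second, hence "load time wouldn't be significant"."""

    def __init__(self, vram_budget_gb=8.0):        # an RTX 2070's VRAM
        self.budget = vram_budget_gb
        self.resident = OrderedDict()              # name -> size_gb, LRU order

    def request(self, name, size_gb, load_weights=lambda name: None):
        if name in self.resident:
            self.resident.move_to_end(name)        # already in VRAM
            return
        while self.resident and sum(self.resident.values()) + size_gb > self.budget:
            self.resident.popitem(last=False)      # evict LRU capability
        load_weights(name)                         # stand-in for the SSD read
        self.resident[name] = size_gb

cache = CapabilityCache()
cache.request("protein_design", 3.0)               # hypothetical capability
cache.request("circuit_layout", 6.0)               # evicts protein_design
print(list(cache.resident))
```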
Anyways, do you have any good policy ideas besides centralizing all the latest AI silicon into places where it can be inspected?
The little problem with pauses in this scenario is say you can get AI models from thousands of years in the future today. What else can you get your hands on...
Anyways, do you have any good policy ideas besides centralizing all the latest AI silicon into places where it can be inspected?
The little problem with pauses in this scenario is say you can get AI models from thousands of years in the future today. What else can you get your hands on...
Once there are AGIs and they had some time to figure things out, or once there are ASIs (which don’t necessarily need to figure things out at length to become able to start making relatively perfect moves), it becomes possible to reach into the bag of their technological wisdom and pull out scary stuff that won’t be contained on their original hardware, even if the AIs themselves remain contained. So for a pause to be effective, it needs to prevent existence of such AIs, containing them is ineffective if they are sufficiently free to go and figure out the scary things.
Without a pause, the process of pulling things out of the bag needs to be extremely disciplined, focusing on pivotal processes that would prevent yourself or others from accidentally or intentionally pulling out an apocalypse 6 months later. And hoping that there’s nothing going on that releases the scary things outside of your extremely disciplined process intended to end the acute risk period, because hope is the only thing you have going for you without decades of research that you don’t have because there was no pause.
Dangerous AGI means all strategically relevant human capabilities, which means robotic tool use. It may not be physically possible on a 2070 due to latency.
There are many meanings of “AGI”, the meaning I’m intending in this context is about cognitive competence. The choice to focus on this meaning rather than some other meaning follows from what I expect to pose existential threat. In this sense, “(existentially) dangerous AGI” means the consequences of its cognitive activities might disempower or kill everyone. The activities don’t need to be about personally controlling terminators, as a silly example setting up a company that designs terminators would have similar effects without requiring good latency.
as a silly example setting up a company that designs terminators would have similar effects without requiring good latency.
Just a note here: good robotics experts today are “hands-on” with the hardware. It gives humans a more grounded understanding of current designs and lets them iterate onto the next one. Good design doesn’t come from just thinking about it, and this would apply to designing humanoid combat robots.
This is also why more grounded current-generation experts will strongly disagree with the very idea of pulling any technology from thousands of years in the future. Current-generation experts will, and do in AI debates, say that you need to build a prototype, test it in the real world, build another based on the information gained, and so on.
This is factually how all current technology was developed. No humans have, to my knowledge, ever done what you described: skipped a technology generation without a prototype, or without many at-scale physical (or software) creations being used by end users. (Historically it has required more and more scale at later stages.)
If this limitation still applies to superintelligence, and there are reasons to think it might (though I would request a dialogue, not a comment deep in a thread few will read), then the concerns you have expressed regarding future superintelligence are not legitimate worries.
If ASIs are in fact limited the same way, the way the world would be different is that each generation of technology developed by ASI would get built and deployed at scale. The human host country would use and test the new technology for a time period. The information gained from actual use is sold back to the ASI owners who then develop the next generation.
This whole iterative process is faster than human tech generations, but it still happens in discrete steps, on a large and visible scale, and at human-perceptible timescales. Probably weeks per cycle, not the months-to-1-2-year cycles humans are able to do.
There are reasons it would take weeks, and it’s not just human feedback: you need time to deploy a product, and you are mining 1-percent-or-lower edge cases in later development stages.
Yes, physical prototypes being inessential is decidedly not business as usual. Without doing things in the physical world, there need to be models for simulations, things like AlphaFold 2 (which predicts far more than would be possible to experimentally observe directly). You need enough data to define the rules of the physical world, and efficient simulations of what the rules imply for any project you consider. I expect automated theory at sufficiently large scale or superintelligent quality to blur the line between (simulated) experiments and impossibly good one shot engineering.
And the way it would fail is if the simulations have an unavoidable error factor, because real-world physics is incomputable or in a problem class above NP (there are posts here showing it is). The other way it would fail is if the simulations are missing information; for example, in real battles between humanoid terminators, the enemy might discover effective strategies the simulation didn’t model. So after deploying a lot of humanoid robots, rival terminators start winning the battles and it’s back to the design stage.
If you did it all in simulation, I predict the robot would immediately fail and be unable to move. Humanoid robots are interdependent systems.
I am sure you know current protein-folding algorithms are not accurate enough to actually use for protein engineering. You have to actually make the protein and test it along with the molecules you want binding logic for, and you will need to adjust your design. If the above is still true for ASI, they will be unable to do protein engineering without a wet lab to build and test the parts at every step, with hundreds of failures for every success.
If it does work that way, then ASI will be faster than humans, since they can analyze more information at once, learn faster, and conservatively try many routes in parallel, but not by the factors you imagine. Maybe 10 times faster, not millions. This is because of Amdahl’s law.
The ASI are also cheaper. So it would be like a world where every technology in every field is being developed at the maximum speed humans could run at, times 10.
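The Amdahl’s-law bound, concretely (the serial fractions below are assumed for illustration): if some fraction of each R&D cycle is physical building, deployment, and edge-case mining that thinking speed cannot compress, that fraction caps the overall speedup no matter how fast the ASI thinks.

```python
def amdahl_speedup(serial_fraction, thinking_speedup):
    """Overall cycle speedup when only the non-serial part can be accelerated."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / thinking_speedup)

# Even with effectively infinite thinking speed, the physical serial part
# caps the whole loop:
for serial in (0.5, 0.1, 0.01):   # assumed fractions, for illustration
    print(f"serial {serial:4.0%} of cycle -> at most {amdahl_speedup(serial, 1e9):,.0f}x")
```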
(Theorizing about ASIs that have no access to physical reality feels noncentral in 2023 when GPT-4 has access to everything and everyone, and integration is only going to get stronger. But for the hypothetical ASI that grew out of an airgapped multimodal 1e29 LLM that has seen all youtube and read all papers and books and the web, I think ability to do good one shot engineering holds.)
(Also, we were discussing an exfiltrated AGI, for why else is RTX 2070 relevant, that happens to lack good latency to control robots. Presumably it doesn’t have the godshatter of technical knowledge, or else it doesn’t really matter that it’s a research-capable AGI. But it now has access to the physical world and can build prototypes. It can build another superintelligence. If it does have a bequest of ASI’s technical knowledge, it can just work to setup unsanctioned datacenters or a distributed network and run an OOMs-more-efficient-than-humanity’s-first-try superintelligence there.)
Predictability is vastly improved by developing the thing you need to predict yourself, especially when you intend to one shot it. Humans don’t do this, because for humans it happens to be much faster and cheaper to build prototypes, we are too slow at thinking useful thoughts. We possibly could succeed a lot more than observed in practice if each prototype was preceded by centuries of simulations and the prototypes were built with insane redundancies.
Simulations get better with data and with better learning algorithms. Looking at how a simulation works, it’s possible to spot issues and improve the simulation, including for the purpose of simulating a particular thing. Serial speed advantage more directly gives theory and general software from distant future (as opposed to engineering designs and experimental data). This includes theory and software for good learning algorithms, those that have much better sample efficiency and interrogate everything about the original 1e29 LLM to learn more of what its weights imply about the physical world. It’s a lot of data, who knows what details can be extracted from it from the position of theoretical and software-technological maturity.
None of this exists now, though. Speculating about the future when it depends on all these unknowns and never-before-seen capabilities is dangerous: you’re virtually certain to be wrong. The uncertainty comes from all the moving parts in your model. Like, you have:
Immense amounts of compute easily available
Accurate simulations of the world
Fully automated AGI; there are no humans helping at all, and the model never gets stuck or just crashes from a bug in the lower framework
Enormously past human capabilities ASI. Not just a modest amount.
The reason you are probably wrong is just probability: if each step has a 50 percent chance of being right, it’s 0.5^4. Don’t take this as me saying you’re wrong.
And then only with all these pieces, humans are maybe doomed and will soon cease to exist. Therefore we should stop everything today.
While if just 1 piece is wrong, then this is the wrong choice to make. Right?
You’re also up against a pro-technology prior. Meaning I think you would have to actually prove the above, demo it, to convince people this is the actual world we are in.
That’s because “future tech, instead of turning out to be overhyped, is going to be so amazing and perfect it can kill everyone quickly and easily” goes against all the priors from cases where tech turned out to be underwhelming and not that good. Like convincing someone the wolf is real when there have been probably a million false alarms.
I don’t know how to think about this correctly. Like, I feel like I should be weighing in the mountain of evidence I mentioned, but if I do that, then humans will always die to the ASI. Because there’s no warning. The whole threat model is that these are capabilities that are never seen prior to a certain point.
The whole threat model is that these are capabilities that are never seen prior to a certain point.
Yep, that’s how ChatGPT is a big deal for waking up policymakers, even as it’s not exactly relevant. I see two paths to a lasting pause. First, LLMs keep getting smarter and something object level scary happens before there are autonomous open weight AGIs, policymakers shut down big models. Second, 1e29 FLOPs is insufficient with LLMs, or LLMs stop getting smarter earlier and 1e29 FLOPs models are not attempted, and models at the scale that’s reached by then don’t get much smarter. It’s still unlikely that people won’t quickly find a way of using RL to extract more and more useful work out of the kind of data LLMs are trained on, but it doesn’t seem impossible that it might take a relatively long time.
Immense amounts of compute easily available
The other side of the argument for AGI on an RTX 2070 is that the hardware that was sufficient to run humanity’s first attempt at AGI is sufficient to do much more than that when it’s employed efficiently.
Fully automated AGI; there are no humans helping at all, and the model never gets stuck or just crashes from a bug in the lower framework
This is the argument’s assumption: the first AGI should be sufficiently close to this to fix the remaining limitations that keep full autonomy from being reliable, including at research. Possibly requiring another long training run, if cracking online learning directly might take longer than that run.
Enormously past human capabilities ASI. Not just a modest amount.
I expect this, but this is not necessary for development of deep technological culture using serial speed advantage at very smart human level.
Accurate simulations of the world
This is more an expectation based on the rest than an assumption.
The reason you are probably wrong is just probability: if each step has a 50 percent chance of being right, it’s 0.5^4.
These things are not independent.
Speculating about the future when it depends on all these unknowns and never-before-seen capabilities is dangerous: you’re virtually certain to be wrong.
That’s an argument about calibration. If you are doing the speculation correctly, not attempting to speculate is certain to leave a less accurate picture than doing it.
It might underwhelm for the simple reason that the really high end AIs take too much hardware to find and run.
I think that’s basically equivalent to my claim, accounting for the differences between our models. I expect this part to be non-trivially difficult (as in, not just “scale LLMs”). People would need to basically roll a lot of dice on architectures, in the hopes of hitting upon something that works[1] – and it’d both take dramatically more rolls if they don’t have a solid gears-level vision of AGI (if they’re just following myopic “make AIs more powerful” gradients), and the lack of said vision/faith would make this random-roll process discouraging.
So non-fanatics would get there eventually, yes, by the simple nature of growing amounts of compute and numbers of experiments. But without a fanatical organized push, it’d take considerably longer.
This would be consistent with a preliminary observation about how long it takes to solve mathematical conjectures. While inference is rendered difficult by the exponential growth of the global population and of mathematicians, the distribution of time-to-solution roughly matches a memoryless exponential distribution (one with a constant chance of solving the conjecture in any time period), rather than a more intuitive distribution like a type 1 survivorship curve (where a conjecture gets easier to solve over time, perhaps as related mathematical knowledge accumulates). This suggests a model of mathematical activity in which many independent random attempts are made, each with a small chance of success, and eventually one succeeds.
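A toy simulation of that constant-hazard model (the per-attempt success probability is assumed for illustration): time-to-solution comes out roughly exponential (mean ≈ standard deviation), and throwing more independent attempts, i.e. more compute and more experiments, at the problem shortens the expected wait, matching the “non-fanatics get there eventually, just slower” picture.

```python
import numpy as np

rng = np.random.default_rng(0)

def solve_times(attempts_per_year, p_success=1e-4, n_samples=100_000):
    """Constant-hazard model: each independent attempt succeeds with small
    probability p; time-to-solution is then geometric (discrete exponential)."""
    p_year = 1.0 - (1.0 - p_success) ** attempts_per_year
    return rng.geometric(p_year, size=n_samples)

for attempts in (10, 100, 1000):
    t = solve_times(attempts)
    print(f"{attempts:5d} attempts/yr: mean {t.mean():7.1f} yr, std {t.std():7.1f} yr")
# mean ~= std is the memoryless signature; ~10x the attempts, ~1/10th the wait
```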
People would need to basically roll a lot of dice on architectures, in the hopes of hitting upon something that works
How much is RSI going to help here? This is already what everyone does for hyperparameter searches (train another network to do them); an AGI architecture search, aka “find me a combination of models that will pass this benchmark”, seems like it would be solvable with the same kind of search.
The way I model it, RSI would let GPU-rich but more mediocre devs find AGI. They won’t be first unless, hypothetically, they don’t get the support of the S-tier talent, say because they are in a different country.
Are you sure there are timelines where “decades” of delay is really possible, if open-source models exist and GPUs exist in ever-increasing and more powerful quantities?
I expect that sort of brute-force-y approach to take even longer than the “normal” vision-less meandering-around.
Well, I guess it can be a hybrid. The first-to-AGI would be some group that maximizes the product of “has any idea what they’re doing” and “how much compute they have” (rather than either variable in isolation). Meaning:
Compute is a “great equalizer” that can somewhat compensate for lack of focused S-tier talent.
But focused S-tier talent can likewise somewhat compensate for having less compute.
That seems to agree with your model?
And my initial point is that un-focusing the S-tier talent would lengthen the timelines.
Are you sure there are timelines where “decades” of delay, if open source models exist and GPUs exist in ever increasing and more powerful quantities is really possible?
So with this model, you think that the entire staff of all the AI labs who have direct experience on the models, less than ~5000 people at top labs, is all who mattered? So if, in your world model, they were all murdered or somehow convinced what they were doing for lavish compensation was wrong, then that would be it. No AGI for decades.
What you are advancing is a variant on the ‘lone innovator’ idea. That if you were to go back in time and murder the ‘inventor’ or developer of a critical innovation, it would make a meaningful difference as to the timeline when the innovation is finally developed.
And it’s falsifiable. If for each major innovation that one person was credited with, if you research the history and were to learn that dozens of other people were just a little late to publish essentially the same idea, developed independently, the theory would be wrong, correct?
And this would extend to AI.
One reason the lone innovator model could be false is if invention doesn’t really happen from inspiration, but when the baseline level of technology or math/unexplained scientific data in some cases makes possible the innovation.
Every innovation I have ever looked at, the lone innovator model is false. There were other people, and if you went back in time and could kill just 1 person, it wouldn’t have made any meaningful difference. One famous one is the telephone, where Bell just happened to get to the patent office first. https://en.wikipedia.org/wiki/Invention_of_the_telephone
Einstein has some of the strongest hype as a lone contributor ever given a human being, yet: https://www.caltech.edu/about/news/not-lone-genius
I think AI is specifically a strong case of the lone innovator theory being false because what made it possible was large parallel CPU and later GPU clusters, where the most impressive results have required the most powerful compute that can be built. In a sense all present AI progress happened the earliest it possibly could have at investment levels of ~1 billion USD/model.
That’s the main thing working against your idea. There’s also many other people outside the mainstream AI labs who want to work on AI so badly they replicated the models and formed their own startup, Eleuther being one. They are also believers, not elite enough to make the headcount at the elite labs.
There could be a very large number of people like that. And of course the actual enablers of modern AI are chip companies such as Nvidia and AMD. They aren’t believers in AI..but they are believers in money (to be fair to your theory, Nvidia was an early believer in AI...to make money). At present they have a strong incentive to sell as many ICs as they can, to as many customers as will buy it. https://www.reuters.com/technology/us-talks-with-nvidia-about-ai-chip-sales-china-raimondo-2023-12-11/
I think that if, hypothetically, you wanted to halt AI progress, the crosshairs have to be on the hardware supply. As long as ever more hardware is being built and shipped, more true believers will be created, same as every other innovation. If you had assassinated every person who ever worked on a telephone like analog acoustic device in history, but didn’t do anything about the supply of DC batteries, amplifiers, audio transducers, and wire, someone is going to ‘hmm’ and invent the device.
As long as more and more GPUs are being shipped, and open source models, same idea.
There’s another prediction that comes out of this. Once the production rate is high enough, AI pauses or slowing down AI in any way probably disappears as a viable option. For example, it’s logical for Nvidia to start paying more for ICs faster: https://www.tomshardware.com/news/nvidia-alleged-super-hot-run-chinese-tsmc-ai-gpu-chips . Once TSMC’s capacity is all turned over to building AI, and 2024 2 million more H100s are built, and MI300X starts to ship in volume...there’s some level at which it’s impossible to even consider an AI pause because too many chips are out there. I’m not sure where the level is, just production ramps should be rapid until the world runs out of fab capacity to reallocate.
You couldn’t plan to control nuclear weapons if every corner drug store was selling weapons grade plutonium in 100g packs. It would be too late.
For a quick idea : to hit the 10^26 “maybe dangerous” threshold in fp32, you would need approximately 2 million cards to do it in a month, or essentially in 2024 Nvidia will sell to anyone who can afford it enough for 12 “maybe dangerous” models to be trained in 2025. AMD will need time for customers to figure out their software stack, and it probably won’t be used as a training accelerator much in 2024.
So “uncontrollable” is some multiplier on this. We can then predict what year AI will no longer be controllable by restricting hardware.
Nvidia wasn’t really an early believer unless you define ‘early’ so generously as to be more or less meaningless, like ‘anyone into DL before AlphaGo’. Your /r/ML link actually inadvertently demonstrates that: distributing a (note the singular in both comments) K40 (released 2013, ~$5k and rapidly declining) here or there as late as ~2014 is not a major investment or what it looks like when a large company is an ‘early believer’. The recent New Yorker profile of Huang covers this and Huang’s admission that he blew it on seeing DL coming and waited a long time before deciding to make it a priority of Nvidia’s—in 2009, they wouldn’t even give Geoff Hinton a single GPU when he asked after a major paper, and their CUDA was never intended for neural networks in the slightest.
And even now, they seem to be surprisingly reluctant to make major commitments to TSMC to ensure a big rampup of B100s and later. As I understand it, TSMC is extremely risk-averse and won’t expand as much as it could unless customers underwrite it in advance so that they can’t lose, and still thinks that AI is some sort of fad like cryptocurrencies that will go bust soon; this makes sense because that sort of deeply-hardwired conservatism is what it takes to survive the semiconductor boom-bust and gambler’s ruin and be one of the last chip fabs left standing. And why Nvidia won’t make those commitments may be Huang’s own conservatism from Nvidia’s early struggles, strikingly depicted in the profile. This sort of corporate DNA may add delay you wouldn’t anticipate from looking at how much money is on the table. I suspect that the ‘all TSMC’s capacity is turned over to AI’ point may take longer than people expect due to their stubbornness. (Which will contribute to the ‘future is already here, just unevenly distributed’ gradient between AI labs and global economy—you will have difficulty deploying your trained models at economical scale.)
Giving away free hardware vs extremely risk averse seems mildly contradictory, but I will assume you mean in actual magnitudes. Paying TSMC to drop everything and make only B100s is yeah, a big gamble they probably won’t make since it would cost billions, while a few free cards is nothing.
So that will slow the ramp down a little bit? Would it have mattered? 2012 era compute would be ~16 times slower per dollar, or more if we factor in lacking optimizations, transformer hasn’t been invented so less efficient networks would be used, etc.
The “it could just be another crypto bubble” is an understandable conclusion. Remember, GPT-4 requires a small fee to even use, and for the kind of senior people who work at chip companies, many of them haven’t even tried it.
You have seen the below, right? To me this looks like a pretty clear signal as to what the market wants regarding AI...
Nope, I think Sam Altman and Elon Musk are the only ones who matter.
Less facetiously: The relevant reference class isn’t “people inventing lightbulbs in a basement”, it’s “major engineering efforts such as the Manhattan Project”. It isn’t about talent, it’s about company vision. It’s about there being large, well-funded groups of people organized around a determined push to develop a particular technology, even if it’s yet out-of-reach for conventional development or lone innovators.
Which requires not only a ton of talented people, not only the leadership with a vision, not only billions of funding, and not only the leadership capable of organizing a ton of people around a pie-in-the-sky idea and attracting billions of funding for it, but all of these things in the same place.
And in this analogy, it doesn’t look like anyone outside the US has really realized yet that nuclear weapons are a thing that is possible. So if they shut the Manhattan Project down, it may be ages before anyone else stumbles upon the idea.
And AGI is dis-analogous to nuclear weapons, in that the LW-style apocalyptic vision is actually much harder to independently invent and take seriously. We don’t have equations proving that it’s possible.
Targeting that chokepoint would have more robust effects, yes.
You’re not wrong about any of this. I’m saying that something like AI has a very clear and large payoff. Many people can see this. Essentially what people worried about AI are saying is the payoff might be too large, something that hasn’t actually happened in human history yet. (this is why for older, more conservative people it feels incredibly unlikely—far more likely that AI will underwhelm, so there’s no danger)
So there’s motive, but what about the means? If an innovation was always going to happen, then here’s why:
Why did Tesla succeed? A shit ton of work. What happens if someone assassinated the real founders of Tesla and Musk in the past?
https://www.energy.gov/eere/vehicles/articles/fotw-1272-january-9-2023-electric-vehicle-battery-pack-costs-2022-are-nearly
Competitive BEVs become possible because of this chart. Model S was released in 2012 and hit mass production numbers in 2015. And obviously there’s one for compute, though it’s missing all the years that matter and it’s not scaled by AI TOPS:
See as long as this is true, it’s kinda inevitable that some kind of AI will be found.
It might underwhelm for the simple reason that the really high end AIs take too much hardware to find and run. To train a 10^26 TOPs model with 166k H100s over 30 days is 4.1 billion in just GPUs. Actual ASI might require orders of magnitude more to train, and more orders of magnitude in failed architectures that have to be tried to find the ones that scale to ASI.
That assumes 230 teraFLOP/s of utilization, so possibly TF32? This assumption doesn’t seem future-proof, even direct use of BF16 might not be in fashion for long. And you don’t need to buy the GPUs outright, or be done in a month.
As an anchor, Mosaic trained a 2.4e23 model in a way that’s projected to require 512 H100s for 11.6 days at precision BF16. So they extracted on average about 460 teraFLOP/s of BF16 utilization out of each H100, about 25% of the quoted 2000 teraFLOP/s. This predicts 150 days for 1e26 FLOPs on 15000 H100s, though utilization will probably suffer. The cost is $300-$700 million if we assume $2-$5 per H100-hour, which is not just GPUs. (If an H100 serves for 3 years at $2/hour, the total revenue is $50K, which is in the ballpark.)
To train a 1e28 model, the estimate anchored on Mosaic’s model asks for 650K H100s for a year. The Gemini report says they used multiple datacenters, so there is no need to cram everything into one building. The cost might be $15-$30 billion. One bottleneck is inference (it could be a 10T parameter dense transformer), as you can’t plan to serve such a model at scale without knowing it’s going to be competent enough to nonetheless be worthwhile, or with reasonable latency. Though the latter probably won’t hold for long either. Another is schlep in making the scale work without $30 billion mistakes, which is cheaper and faster to figure out by iterating at smaller scales. Unless models stop getting smarter with further scale, we’ll get there in a few years.
I personally work as a generalist on inference stacks. I have friends at the top labs, what I understand is during training you need high numerical precision or you are losing information. This is why you use fp32 or tf32 and I was assuming 30 percent utilization because there are other bottlenecks in current generation hardware on LLM training. (Memory bandwidth being one).
If you can make training work for “AGI” level models with less numerical precision, absolutely this helps. Your gradients are numerically less stable with deeper networks the less bits you use though.
For inference the obvious way to do that is irregular sparsity, it will be much more efficient. This is relevant to ai safety because models so large that they only run at non negligible speed of ASICs supporting irregular sparsity will be trapped. Escape would not be possible so long as the only hardware able to support the model exists at a few centralized data centers.
Irregular sparsity would also need different training hardware, you obviously would start less than fully connected and add or prune connections during phases of training.
This is probably more fiddly at larger scales, but the striking specific thing to point to is the loss measurements in this Oct 2023 paper (see page 7). As loss doesn't just manageably deteriorate, but essentially doesn't change for 1.5B-parameter models (when going from FP32 to MXFP6_E3M2 during pre-training), there is probably a way to keep that working at larger scales. This is about Microscaling datatypes, not straightforward lower precision.
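For intuition, here's a toy sketch of the shared-scale idea behind Microscaling (my illustration, not the paper's method: real MXFP6_E3M2 uses a floating-point element format, while this keeps only the block-scaling skeleton):

```python
import numpy as np

def mx_quantize_block(block: np.ndarray, element_bits: int = 6) -> np.ndarray:
    """Quantize one block of weights against a single shared power-of-two scale."""
    max_abs = np.max(np.abs(block))
    if max_abs == 0.0:
        return np.zeros_like(block)
    shared_scale = 2.0 ** np.ceil(np.log2(max_abs))  # one scale per block
    levels = 2 ** (element_bits - 1)                 # reserve a sign bit
    # Each element is stored as a small integer relative to the shared scale.
    q = np.clip(np.round(block / shared_scale * levels), -levels, levels - 1)
    return q / levels * shared_scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=32).astype(np.float32)  # one 32-element block
w_q = mx_quantize_block(w)
print("relative error:", np.linalg.norm(w - w_q) / np.linalg.norm(w))
```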
Skimming the paper and the method: yeah, this should work at any scale. This type of mixed-precision algorithm has a cost, though. You still need hardware support for the higher precisions, which costs you chip area and complexity, as you need to multiply by the block scaling factor and use a 32-bit accumulator to hold the product.
On paper, Nvidia claims their TF32 is only 4 times slower than int8, so the gains from this algorithm are small on that hardware, because you end up with more total weights with Microscaling or other similar methods.
For inference accelerators, being able to drop FP32 entirely is where the payoff really is: that's a huge chunk of silicon you can leave out of the design. Mixed precision helps there. A 3x-or-so benefit is also large at inference time because it's 3x less cost; Microsoft would make money from Copilot if they could cut their costs 3x.
Oh, but also less VRAM consumption, which is the limiting factor for LLMs. And less network traffic between nodes in large training clusters. A flat ~3x boost to the largest model you can train?
Right, hence the point about future-proofing the FLOP/s estimate: it's a potential architecture improvement that's not bottlenecked by the slower low-level hardware improvement. If you bake quantization in during pre-training, the model grows up adapted to it and there is no quality loss caused by quantizing after pre-training. It's 5-8 times less memory (32-bit weights down to 4-6 bits), and with hardware acceleration (while not designing the chip to do anything else) probably proportionately faster.
Taken together with papers like this, it suggests there is further potential if we don't try to closely reproduce linear algebra, which might work well if pre-training adapts the model from the start to the particular weird way of computing things (and if an optimizer can cope). For fitting larger models into fewer nodes for sane latency during inference, there's this (which suggests a GPT-4-scale transformer might run on 1-2 H200s; though at scale it seems more relevant for 1e28-FLOP models, if I'm correct in guessing it kills throughput). These are the kinds of things that, given the current development boom, will have a mainstream equivalent in 1-3 years if they work at all, which they seem to.
So I think something interesting comes out of this discussion. Those who worry the most about AI doom, Max H being one, have posited that an optimal AGI could run on an RTX 2070.
Yet the biggest benefits come, like you say, from custom hardware intended to accelerate the optimization. I mentioned arbitrary sparsity, where each layer of the network can consume an arbitrary number of neural tiles, with many on-chip subnets to allow a large possibility space of architectures. And you mentioned training accelerators designed for mixed-precision training.
Turing-complete or not, the wrong hardware will be thousands of times slower. It could be that in the next few years newer models only run on new silicon, obsoleting all existing silicon, and this could happen several times. It obviously did for GPU graphics.
This would therefore be a way to regulate AI. If you could simply guarantee that the current generation of AI hardware was concentrated in known facilities worldwide and nowhere else (robots would use special edge cards with ICs missing the network controllers needed for clustering), risks would probably be much lower.
This helps to regulate AI that doesn't pose an existential threat. AGIs might have time to consolidate their cognitive advantages while running on their original hardware optimized for AI. This would quickly (on a human timescale) give them theory and engineering plans from the distant future (of a counterfactual human civilization without AGIs). Or alternatively, superintelligence might be feasible without doing such work at human level, getting there faster.
In that situation, I wouldn't rule out running an AGI on a 2070; it's just not a human-designed AGI, but one designed either by superintelligence or by an ancient civilization of human-adjacent-level AGIs (in the sense of the serial depth of their technological culture). You might need 1e29 FLOPs of AI-optimized hardware to make a scaffolded LLM that works as a weirdly disabled AGI barely good enough to invent its way out of its cognitive limitations. You don't need that for a human-level AGI designed from a position of thorough comprehension of how AGIs work.
So what you are saying here could be equivalent to "limitless optimization is possible": that given a particular computational problem, it is possible to find an equivalent algorithm that solves it thousands of times faster or more. Note how the papers you linked don't show that; their solutions trade off added complexity for somewhere in the range of 3-6 times fewer weights and a similar speedup.
You can assume more aggressive optimizations may require more and more tradeoffs, plus silicon support that older GPUs won't have. (Older GPUs lacked any FP16-or-lower matrix support.)
This is the usual situation for all engineering: most optimizations come at a cost, frequently complexity. And there exists an absolute limit. Compare Newcomen's first steam engine to the theoretical limit for a steam engine: the first engine was 0.5 percent efficient, modern engines reach 91 percent, so roughly 2 orders of magnitude of improvement were possible. The steam engine from the far future can be at most 100 percent efficient.
For neural networks, a sparse network has the same time complexity, O(n^2), as the dense version. So if you think 10x sparsity will work, and you think you can optimize further with lower-precision math, that's about 2 orders of magnitude of optimization.
How much better can you do? If you need 800 H100s to host a human-brain-equivalent machine (for the VRAM) even with some optimization, is there really enough room left to find an equivalent function, even one from the far future, that will fit on a 2070?
Note this would be going from 3,166,400 int8 TOPS (for 800 H100s) down to 60 TOPS, or about a 53,000x optimization. For VRAM, since that's the limiting factor, it would be an 8,000x reduction.
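A quick sanity check of those ratios, assuming an H100 peak of ~3,958 int8 TOPS (with structured sparsity) and 80 GB each, against the 2070's ~60 TOPS and 8 GB:

```python
# The gap between a hypothesized 800-H100 brain-equivalent host and one 2070.
h100_tops, h100_vram_gb = 3_958, 80
rtx2070_tops, rtx2070_vram_gb = 60, 8
n_h100 = 800

print(f"compute gap: {n_h100 * h100_tops / rtx2070_tops:,.0f}x")       # ~52,800x
print(f"VRAM gap:    {n_h100 * h100_vram_gb / rtx2070_vram_gb:,.0f}x") # 8,000x
```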
I can't prove that algorithms from the cognitive science of the far future can't work under these constraints. It seems unlikely, though; probably there is some minimum computational equivalent that can implement a human-grade robotics model, and it needs either more compute and memory than that, or specialized hardware that does less work.
Even 800 H100s may be unable to run a human-grade robotics model due to latency; you may require specialized hardware for this.
What can we humans do about this? "Just stop building amazing AI tools because AI will find optimizations from the far future and infect all the computers" isn't a very convincing argument without demoing it.
AGI is not a specific computation that needs to be optimized to run faster without functional change; it's a vaguely defined level of competence. There are undoubtedly multiple fundamentally different ways of building AGI that won't be naturally obtained from each other by incrementally optimizing performance, and you'd need more theory to even find them.
So what I'm saying is that there is a bounded but still large gap between how we manage to build AGI for the first time, in a rush and without a theoretical foundation that describes how to do it properly, and how an ancient civilization of at least somewhat smarter-than-human AGIs can do it while thinking for subjective human-equivalent centuries of serial time about both the general theory and the particular problem of creating a bespoke AGI for an RTX 2070.
There can't be modern papers that describe specifically how the deep technological culture invented by AGIs does such things, so of course the papers I listed are not about that; they are relevant to the thing below the large gap, how humans might build the first AGIs.
Yes. I'm betting humans won't come close to it on the first try, so significant optimization beyond that first try will be possible (including through a fundamental change of approach indicated by novel basic theory), creating an overhang ripe for exploitation by algorithmic improvements invented by the first AGIs. Also, creating theory and software faster than hardware can be built or modified changes priorities: there are things humans would just build custom hardware for, because that's much cheaper and faster than attempting optimization through theory and software.
We can say that with our theory of conservation of energy, but there is currently no such theory for what it takes to obtain an AGI-worthy level of competence in a system. So I have wide uncertainty about the number of OOMs of possible improvement between the first try and theoretical perfection.
Existential risk is about cognitive competence, not robotics. Subjectively, GPT-4 seems smart enough to be a human-level AGI if it were built correctly and could learn. One of the papers I linked is about running something of GPT-4's scale on a single H200 (possibly only a few instances, since I'm guessing this doesn't compress activations, and a large model has a lot of activations). A GPT-4-shaped model massively overtrained on an outrageous quantity of impossibly high-quality synthetic data will be more competent than actual GPT-4, so it can probably be significantly smaller while maintaining similar competence. RAG and LoRA fine-tuning give hopelessly broken and weirdly disabled online learning that can run cheaply. If all this is somehow fixed, which I don't expect to happen very soon through human effort, it doesn't seem outlandish for the resulting system to become an AGI (in the sense of being capable of pursuing open-ended technological progress and managing its affairs).
Another anchor I like is how 50M-parameter models play good Go while being 4 OOMs smaller than 400B LLMs, which are broken approximations of the behavior of humans who play Go similarly well. And playing Go is a more well-defined level of competence than being an AGI, probably with fewer opportunities for cleverly sidestepping computational difficulties by doing something different.
Dangerous AGI means all strategically relevant human capabilities, which includes robotic tool use. That may not be physically possible on a 2070 due to latency. (The latency comes from the time to process images, convert them into a 3D environment, reason over the environment with a robotics policy, and choose the next action. Simply cheating with a lidar might be enough to make this work, of course.)
With that said, I see your point about Go, and there is a fairly obvious way to build an "AGI"-like system for anything not latency-bound that might fit on so little compute. It would need a lot of SSD space and would have specialized cached models for all known human capabilities. Give the "AGI" a test and it loads the capabilities needed to solve the test into memory. It has capabilities at different levels of fidelity, and possibly selects an architecture topology on the fly to solve the task.
Since the 2070 has only 8 GB of memory and modern SSDs hit several gigabytes per second, the load time compared with humans wouldn't be significant.
Note that this optimization, like all engineering tradeoffs, costs something: in this case, huge amounts of disk space. It might need hundreds of terabytes to hold all the models. But maybe there is a way to compress that.
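A toy sketch of that load-on-demand design; every name, size, and speed here is an illustrative assumption, not a real system:

```python
# Toy capability cache: specialized models live on SSD, and only the ones the
# current task needs are loaded into a small VRAM budget, evicting older ones.
VRAM_BUDGET_GB = 8.0   # e.g. an RTX 2070
SSD_GB_PER_S = 3.0     # rough sequential read speed of a modern NVMe SSD

class CapabilityCache:
    def __init__(self) -> None:
        self.loaded: dict[str, float] = {}  # capability name -> size in GB
        self.used_gb = 0.0

    def load(self, capability: str, size_gb: float) -> str:
        # Evict the oldest-loaded capabilities until the new one fits.
        while self.used_gb + size_gb > VRAM_BUDGET_GB and self.loaded:
            oldest = next(iter(self.loaded))
            self.used_gb -= self.loaded.pop(oldest)
        self.loaded[capability] = size_gb  # stand-in for the actual SSD read
        self.used_gb += size_gb
        return (f"{capability}: {size_gb:.0f} GB, "
                f"~{size_gb / SSD_GB_PER_S:.1f}s to load")

cache = CapabilityCache()
print(cache.load("algebra", 4))         # ~1.3s: negligible next to a human
print(cache.load("circuit-design", 6))  # evicts "algebra" to make room
```

The point of the sketch is the timing: a couple of seconds of SSD traffic per capability switch is small next to human task-switching, which is why the 8 GB budget is less binding than it looks.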
Anyways, do you have any good policy ideas besides centralizing all the latest AI silicon into places where it can be inspected?
The little problem with pauses in this scenario: say you can get AI models from thousands of years in the future today. What else can you get your hands on?
Once there are AGIs and they've had some time to figure things out, or once there are ASIs (which don't necessarily need to figure things out at length to become able to start making relatively perfect moves), it becomes possible to reach into the bag of their technological wisdom and pull out scary stuff that won't be contained on their original hardware, even if the AIs themselves remain contained. So for a pause to be effective, it needs to prevent the existence of such AIs; containing them is ineffective if they are sufficiently free to go and figure out the scary things.
Without a pause, the process of pulling things out of the bag needs to be extremely disciplined, focusing on pivotal processes that would prevent yourself or others from accidentally or intentionally pulling out an apocalypse 6 months later. And hoping that there’s nothing going on that releases the scary things outside of your extremely disciplined process intended to end the acute risk period, because hope is the only thing you have going for you without decades of research that you don’t have because there was no pause.
There are many meanings of "AGI"; the meaning I'm intending in this context is about cognitive competence. The choice to focus on this meaning rather than some other follows from what I expect to pose existential threat. In this sense, "(existentially) dangerous AGI" means the consequences of its cognitive activities might disempower or kill everyone. The activities don't need to be about personally controlling terminators; as a silly example, setting up a company that designs terminators would have similar effects without requiring good latency.
Just a note here: good robotics experts today are "hands on" with the hardware. It gives humans a more grounded understanding of current designs and lets them iterate onto the next one. Good design doesn't come from just thinking about it, and this would apply to designing humanoid combat robots.
This is also why more grounded current-generation experts will strongly disagree with the very idea of pulling any technology from thousands of years in the future. Current-generation experts will say, and do say in AI debates, that you need to build a prototype, test it in the real world, build another based on the information gained, and so on.
This is factually how all current technology was developed. No humans have, to my knowledge, ever done what you described and skipped a technology generation without a prototype, or without many at-scale physical (or software) creations being used by end users. (Historically it has required more and more scale at later stages.)
If this limitation still applies to superintelligence (and there are reasons to think it might, though I would request a dialogue rather than a comment deep in a thread few will read), then the concerns you have expressed regarding future superintelligence are not legitimate worries.
If ASIs are in fact limited the same way, the way the world would be different is that each generation of technology developed by ASI would get built and deployed at scale. The human host country would use and test the new technology for a period. The information gained from actual use is sold back to the ASI's owners, who then develop the next generation.
This whole iterative process is faster than human tech generations, but it still happens in discrete steps, on a large, visible scale, and at human-perceptible timescales. Probably weeks per cycle, rather than the months-to-1-2-year cycles humans are able to do.
There are reasons it would take weeks, and it's not just human feedback: you need time to deploy a product, and in later development stages you are mining edge cases that occur 1 percent of the time or less.
Yes, physical prototypes being inessential is decidedly not business as usual. Without doing things in the physical world, there need to be models for simulations, things like AlphaFold 2 (which predicts far more than would be possible to experimentally observe directly). You need enough data to define the rules of the physical world, and efficient simulations of what the rules imply for any project you consider. I expect automated theory at sufficiently large scale or superintelligent quality to blur the line between (simulated) experiments and impossibly good one shot engineering.
And the way it would fail is if the simulations have an unavoidable error factor because real-world physics is incomputable or in a problem class above NP (there are posts here showing it is). The other way it would fail is if the simulations are missing information; for example, in real battles between humanoid terminators, the enemy might discover effective strategies the simulation didn't model. So after deploying a lot of humanoid robots, rival terminators start winning the battles and it's back to the design stage.
If you did it all in simulation, I predict the robot would immediately fail and be unable to move. Humanoid robots are interdependent systems.
I am sure you know that current protein-folding algorithms are not accurate enough to actually use for protein engineering. You have to actually make the protein and test it along with the molecules you want binding logic for, and you will need to adjust your design. If this is still true for ASIs, they will be unable to do protein engineering without a wet lab to build and test the parts at every step, where there will be hundreds of failures for every success.
If it does work that way, then ASIs will be faster than humans, since they can analyze more information at once, learn faster, and conservatively try many routes in parallel, but not by the factors you imagine. Maybe 10 times faster, not millions. This is because of Amdahl's law.
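The Amdahl's-law point made concrete; the 10 percent serial fraction is an illustrative assumption for irreducible physical build-and-test time:

```python
# Amdahl-style bound: if some fraction of each development cycle is physical
# build-and-test that cognition cannot speed up, that fraction caps the
# overall speedup no matter how fast the thinking gets.
def overall_speedup(serial_fraction: float, cognitive_speedup: float) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cognitive_speedup)

print(overall_speedup(0.10, 10))    # ~5.3x
print(overall_speedup(0.10, 1e9))   # ~10x: the ceiling, however smart the ASI
```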
The ASIs are also cheaper. So it would be like a world where every technology in every field is being developed at the maximum speed humans could run at, times 10.
(Theorizing about ASIs that have no access to physical reality feels noncentral in 2023 when GPT-4 has access to everything and everyone, and integration is only going to get stronger. But for the hypothetical ASI that grew out of an airgapped multimodal 1e29 LLM that has seen all youtube and read all papers and books and the web, I think ability to do good one shot engineering holds.)
(Also, we were discussing an exfiltrated AGI, for why else would an RTX 2070 be relevant, that happens to lack good latency to control robots. Presumably it doesn't have the godshatter of technical knowledge, or else it doesn't really matter that it's a research-capable AGI. But it now has access to the physical world and can build prototypes. It can build another superintelligence. If it does have a bequest of ASI's technical knowledge, it can just work to set up unsanctioned datacenters or a distributed network and run an OOMs-more-efficient-than-humanity's-first-try superintelligence there.)
Predictability is vastly improved by developing the thing you need to predict yourself, especially when you intend to one-shot it. Humans don't do this, because for humans it happens to be much faster and cheaper to build prototypes; we are too slow at thinking useful thoughts. We could possibly succeed a lot more often than observed in practice if each prototype were preceded by centuries of simulations and built with insane redundancies.
Simulations get better with data and with better learning algorithms. Looking at how a simulation works, it’s possible to spot issues and improve the simulation, including for the purpose of simulating a particular thing. Serial speed advantage more directly gives theory and general software from distant future (as opposed to engineering designs and experimental data). This includes theory and software for good learning algorithms, those that have much better sample efficiency and interrogate everything about the original 1e29 LLM to learn more of what its weights imply about the physical world. It’s a lot of data, who knows what details can be extracted from it from the position of theoretical and software-technological maturity.
None of this exists now, though. Speculating about the future when it depends on all these unknowns and never-before-seen capabilities is dangerous: you're virtually certain to be wrong. The uncertainty comes from all the moving parts in your model. You have:
1. Immense amounts of compute easily available.
2. Accurate simulations of the world.
3. Fully automated AGI: no humans helping at all, and the model never gets stuck or crashes from a bug in the underlying framework.
4. ASI enormously past human capabilities, not just a modest amount.
The reason you are probably wrong is just probability: if each step has a 50 percent chance of being right, the whole scenario is 0.5^4, about 6 percent. Don't think of it as me saying you're wrong.
And then only with all these pieces, humans are maybe doomed and will soon cease to exist. Therefore we should stop everything today.
While if just 1 piece is wrong, then this is the wrong choice to make. Right?
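For concreteness, the arithmetic of that four-piece model as a sketch; the independence assumption is doing all the work here, a point that comes up again below:

```python
# Naive conjunctive estimate: four independent pieces at 50% each.
p_steps = [0.5, 0.5, 0.5, 0.5]
p_all = 1.0
for p in p_steps:
    p_all *= p
print(p_all)  # 0.0625, i.e. ~6%
# If the pieces are correlated (one breakthrough delivering several of them),
# the joint probability can be much higher than this product.
```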
You're also up against a pro-technology prior. Meaning I think you would have to actually prove the above (demo it) to convince people this is the actual world we are in.
That's because "future tech, instead of turning out to be overhyped, is going to be so amazing and perfect it can kill everyone quickly and easily" goes against all the priors where tech turned out to be underwhelming and not that good. It's like convincing someone the wolf is real when there have probably been a million false alarms.
I don't know how to think about this correctly. I feel like I should be weighing the mountain of evidence I mentioned, but if I do that, then humans always die to the ASI, because there's no warning. The whole threat model is that these are capabilities that are never seen prior to a certain point.
Yep, that's how ChatGPT is a big deal for waking up policymakers, even though it's not exactly relevant. I see two paths to a lasting pause. First, LLMs keep getting smarter, something object-level scary happens before there are autonomous open-weight AGIs, and policymakers shut down big models. Second, 1e29 FLOPs is insufficient with LLMs, or LLMs stop getting smarter earlier and 1e29-FLOP models are never attempted, and models at the scale that's reached by then don't get much smarter. It's still unlikely that people won't quickly find a way of using RL to extract more and more useful work out of the kind of data LLMs are trained on, but it doesn't seem impossible that it might take a relatively long time.
The other side of the argument for AGI on an RTX 2070 is that the hardware that was sufficient to run humanity's first attempt at AGI is sufficient to do much more than that when employed efficiently.
This is the argument's assumption: the first AGI should be sufficiently close to this to fix the remaining limitations that stand in the way of reliable full autonomy, including at research. Possibly requiring another long training run, if cracking online learning directly would take longer than that run.
I expect this, but this is not necessary for development of deep technological culture using serial speed advantage at very smart human level.
This is more an expectation based on the rest than an assumption.
These things are not independent.
That’s an argument about calibration. If you are doing the speculation correctly, not attempting to speculate is certain to leave a less accurate picture than doing it.
If you feel there are further issues to discuss, pm me for a dialogue.
I think that’s basically equivalent to my claim, accounting for the differences between our models. I expect this part to be non-trivially difficult (as in, not just “scale LLMs”). People would need to basically roll a lot of dice on architectures, in the hopes of hitting upon something that works[1] – and it’d both take dramatically more rolls if they don’t have a solid gears-level vision of AGI (if they’re just following myopic “make AIs more powerful” gradients), and the lack of said vision/faith would make this random-roll process discouraging.
So non-fanatics would get there eventually, yes, by the simple nature of growing amounts of compute and numbers of experiments. But without a fanatical organized push, it’d take considerably longer.
That's how math research already appears to work.
How much is RSI going to help here? This is already what everyone does for hyperparameter searches (train another network to do them), and an AGI architecture search, aka "find me a combination of models that will pass this benchmark", seems like it would be solvable with such a search.
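As a toy illustration of that kind of search; everything here is a hypothetical stand-in, and in reality score_on_benchmark would be an expensive train-and-evaluate run:

```python
import random

# Random search over a toy architecture space: roll dice on configurations,
# score each on a benchmark, keep the best. An RSI-style search would replace
# the random sampling with a model proposing the next configuration.
SEARCH_SPACE = {
    "depth":     [12, 24, 48, 96],
    "width":     [1024, 4096, 8192],
    "attention": ["dense", "sparse", "mixture"],
    "precision": ["bf16", "mxfp6"],
}

def score_on_benchmark(config: dict) -> float:
    # Hypothetical stand-in: really a full training run plus evaluation.
    return random.random()

def search(n_rolls: int) -> tuple[float, dict]:
    best_score, best_config = float("-inf"), {}
    for _ in range(n_rolls):
        config = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        s = score_on_benchmark(config)
        if s > best_score:
            best_score, best_config = s, config
    return best_score, best_config

print(search(n_rolls=100))
```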
The way I model it, RSI would let GPU-rich but more mediocre devs find AGI. They won't be first unless, hypothetically, they can't get the support of the S-tier talent, say because they are in a different country.
Are you sure there are timelines where "decades" of delay are really possible, if open-source models exist and GPUs keep being built in ever-increasing quantity and power?
I expect that sort of brute-force-y approach to take even longer than the “normal” vision-less meandering-around.
Well, I guess it can be a hybrid. The first-to-AGI would be some group that maximizes the product of “has any idea what they’re doing” and “how much compute they have” (rather than either variable in isolation). Meaning:
Compute is a “great equalizer” that can somewhat compensate for lack of focused S-tier talent.
But focused S-tier talent can likewise somewhat compensate for having less compute.
That seems to agree with your model?
And my initial point is that un-focusing the S-tier talent would lengthen the timelines.
Sure? No, not at all sure.