This is probably more fiddly at larger scales, but the striking specific thing to point to is the loss measurements in this Oct 2023 paper (see page 7), which claims
MXFP6 provides the first demonstration of training generative language models to parity with FP32 using 6-bit weights, activations, and gradients with no modification to the training recipe.
and
Our results demonstrate that generative language models can be trained with MXFP4 weights and MXFP6 activations and gradients incurring only a minor penalty in the model loss.
As the loss doesn't just deteriorate manageably, but essentially doesn't change for 1.5B-parameter models (when going from FP32 to MXFP6_E3M2 during pre-training), there is probably a way to keep that working at larger scales. This is about Microscaling (MX) datatypes, not straightforward lower precision.
Skimming the paper and the method: yeah, this should work at any scale. This type of mixed precision algorithm has a cost though: you still need hardware support for the higher precisions, which costs you chip area and complexity, as you need to multiply by the block scaling factor and hold the running sum in a 32-bit accumulator.
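As a concrete sketch of where that cost shows up, here is a toy MXFP4-style dot product in numpy: each block of 32 values shares one power-of-two scale, elements snap to the FP4 (E2M1) grid, and products are rescaled and summed in FP32. The grid and the scaling rule are my reading of the general OCP Microscaling approach, not the paper's exact recipe.

```python
import numpy as np

# Magnitudes representable by an FP4 (E2M1) element. The grid and the
# power-of-two block-scale rule below are assumptions based on the OCP
# Microscaling (MX) spec, not the paper's exact recipe.
FP4 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def mx_quantize(x, block=32):
    """Return (codes, scales): one shared power-of-two scale per block."""
    x = x.reshape(-1, block)
    max_abs = np.abs(x).max(axis=1, keepdims=True)
    max_abs = np.where(max_abs > 0, max_abs, 1.0)
    scale = 2.0 ** (np.floor(np.log2(max_abs)) - 2)   # emax of E2M1 is 2
    scaled = np.clip(np.abs(x) / scale, 0.0, FP4[-1])
    idx = np.abs(scaled[..., None] - FP4).argmin(axis=-1)  # nearest grid point
    return np.sign(x) * FP4[idx], scale

def mx_dot(a, b, block=32):
    """Dot product of two MX-quantized vectors with FP32 accumulation."""
    qa, sa = mx_quantize(a, block)
    qb, sb = mx_quantize(b, block)
    # Low-precision elementwise products, rescaled per block, summed in
    # float32: this rescale-and-accumulate step is the extra silicon cost.
    return np.float32(((qa * qb) * (sa * sb)).sum())

rng = np.random.default_rng(0)
a, b = rng.standard_normal(256), rng.standard_normal(256)
print(mx_dot(a, b), float(a @ b))  # quantized vs exact dot product
```

Even in this toy version, the per-block scale multiply and the FP32 accumulator never go away, which is the hardware support being pointed at.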
On paper, Nvidia claims their TF32 is only 4 times slower than INT8, so the gains from this algorithm are small on that hardware, because you end up storing more total values with microscaling or other similar methods (the per-block scale factors come on top of the weights).
For inference accelerators, being able to drop FP32 entirely is where the payoff really is; that's a huge chunk of silicon you can leave out of the design, and mixed precision helps there. A 3x or so benefit is also large at inference time because it's 3x less cost: Microsoft would make money on Copilot if they could cut their costs 3x.
Oh, but also less VRAM consumption, which is the limiting factor for LLMs. And... less network traffic between nodes in large training clusters. A flat ~3x boost to the largest model you can train?
Right, hence the point about future-proofing the FLOP/s estimate: it's a potential architecture improvement that isn't bottlenecked by the slower pace of low-level hardware improvement. If you bake quantization in during pre-training, the model grows up adapted to it, and there is no quality loss caused by quantizing after pre-training. It's 5-8 times less memory, and with hardware acceleration (while not designing the chip to do anything else) probably proportionately faster.
Taken together with papers like this, this suggests that there is further potential if we don’t try to closely reproduce linear algebra, which might work well if pre-training adapts the model from the start to the particular weird way of computing things (and if an optimizer can cope). For fitting larger models into fewer nodes for sane latency during inference there’s this (which suggests a GPT-4 scale transformer might run on 1-2 H200s; though at scale it seems more relevant for 1e28 FLOPs models, if I’m correct in guessing this kills throughput). These are the kinds of things that given the current development boom will have a mainstream equivalent in 1-3 years if they work at all, which they seem to.
So I think there is something interesting from this discussion. Those who worry the most about AI doom, Max H being one, have posited that an optimal AGI could run on a 2070.
Yet the most benefits come, like you say, from custom hardware intended to accelerate the optimization. I mentioned arbitrary sparsity, where each layer of the network can consume an arbitrary number of neural tiles and there are many on chip subnets to allow a large possibility space of architectures. Or you mentioned training accelerators designed for mixed precision training.
Turing complete or not, the wrong hardware will be thousands of times slower. It could be in the next few years that newer models only run on new silicon, obsoleting all existing silicon, and this could happen several times. It obviously did for GPU graphics.
This would therefore be a way to regulate AI. If you could simply guarantee that the current generation of AI hardware was concentrated into known facilities worldwide, nowhere else (robots would use special edge cards with ICs missing the network controllers for clustering), risks would probably be much lower.
This helps to regulate AI that doesn't pose an existential threat. AGIs might still have time to consolidate their cognitive advantages while running on their original hardware optimized for AI. This would quickly (on a human timescale) give them theory and engineering plans from the distant future (of a counterfactual human civilization without AGIs). Or alternatively, superintelligence might be feasible without doing such work at human level, getting there faster.
In that situation, I wouldn’t rule out running an AGI on a 2070, it’s just not a human-designed AGI, but one designed either by superintelligence or by an ancient civilization of human-adjacent level AGIs (in the sense of serial depth of their technological culture). You might need 1e29 FLOPs of AI-optimized hardware to make a scaffolded LLM that works as a weirdly disabled AGI barely good enough to invent its way into removing its cognitive limitations. You don’t need that for a human level AGI designed from the position of thorough comprehension of how AGIs work.
So what you are saying here could be equivalent to “limitless optimization is possible”. Meaning that given a particular computational problem, it is possible to find an equivalent algorithm to solve that problem that is thousands of times faster or more. Note how the papers you linked don’t show that; the solutions trade off added complexity for somewhere in the range of 3-6 times fewer weights and a corresponding speedup.
You can assume more aggressive optimizations may require more and more tradeoffs and silicon support older GPUs won’t have. (Older GPUs lacked any fp16 or lower matrix support)
This is the usual situation for all engineering: most optimizations come at a cost, frequently complexity. And there exists an absolute limit. Like if we compare Newcomen’s first steam engine vs the theoretical limit for a steam engine. First engine was 0.5 percent efficient. Modern engines reach 91 percent. So roughly 2 orders of magnitude were possible. The steam engine from the far future can be at most 100 percent efficient.
For neural networks, a sparse network has the same time complexity, O(n^2), as the dense version. So if you think 10 times sparsity will work, and if you think you can optimize further with lower precision math, that’s about 2 orders of magnitude of optimization.
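As a quick sanity check on both of those order-of-magnitude estimates (the factors here are the ones assumed in the discussion, not measured numbers):

```python
import math

# Steam engine: 0.5% efficient at first, ~91% in modern engines (per the text)
steam_ooms = math.log10(91 / 0.5)
print(round(steam_ooms, 2))  # roughly 2 orders of magnitude

# Neural nets: 10x from sparsity (if it works) times roughly 8x from
# lower-precision math (e.g. fp32 -> 4-bit) -- both factors are assumptions
combined = 10 * 8
print(combined, round(math.log10(combined), 2))  # ~80x, about 2 OOMs
```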
How much better can you do? If you need 800 H100s to host a human brain equivalent machine (for the vram) with some optimization, is there really enough room left to find an equivalent function, even from the far future, that will fit on a 2070?
Note this would be from 3,166,400 int8 TOPS (for 800 H100s) down to 60 TOPS, or roughly 53,000 times optimization. For VRAM, since that’s the limiting factor, it would be an 8,000 times reduction (64 TB down to 8 GB).
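Spelling out that arithmetic (the per-device numbers are assumptions taken from public spec sheets: ~3,958 int8 TOPS and 80 GB per H100; ~60 int8 TOPS and 8 GB for an RTX 2070; 800 H100s as in the question above):

```python
h100s = 800
cluster_tops = h100s * 3958      # assumed ~3,958 int8 TOPS per H100
cluster_vram_gb = h100s * 80     # 80 GB HBM per H100

compute_reduction = cluster_tops / 60   # RTX 2070: ~60 int8 TOPS (assumed)
vram_reduction = cluster_vram_gb / 8    # RTX 2070: 8 GB VRAM

print(f"{compute_reduction:,.0f}x compute, {vram_reduction:,.0f}x VRAM")
```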
I can’t prove that algorithms from the cognitive science of the far future can’t work under these constraints. It seems unlikely; probably the minimal computational equivalent of a human-grade robotics model needs either more compute and memory than that, or specialized hardware that does less general work.
Even 800 H100s may be unable to run a human-grade robotics model due to latency; you may require specialized hardware for this.
What can we humans do about this? “Just stop building amazing AI tools because AI will find optimizations from the far future and infect all the computers” isn’t a very convincing argument without demoing it.
So what you are saying here could be equivalent to “limitless optimization is possible”. Meaning that given a particular computational problem, it is possible to find an equivalent algorithm to solve that problem that is thousands of times faster or more.
AGI is not a specific computation that needs to be optimized to run faster without functional change, it’s a vaguely defined level of competence. There are undoubtedly multiple fundamentally different ways of building AGI that won’t be naturally obtained from each other by incrementally optimizing performance, you’d need more theory to even find them.
So what I’m saying is that there is a bounded but still large gap between how we manage to build AGI for the first time, while in a rush and without a theoretical foundation that describes how to do it properly, and how an ancient civilization of at least somewhat smarter-than-human AGIs can do it while thinking for subjective human-equivalent centuries of serial time about both the general theory and the particular problem of creating a bespoke AGI for an RTX 2070.
Note how the papers you linked don’t show that; the solutions trade off added complexity for somewhere in the range of 3-6 times fewer weights and a corresponding speedup.
There can’t be modern papers that describe specifically how the deep technological culture invented by AGIs does such things, so of course the papers I listed are not about that, they are relevant to the thing below the large gap, of how humans might build the first AGIs.
And there exists an absolute limit.
Yes. I’m betting humans won’t come close to it on first try, therefore significant optimization beyond that first try would be possible (including through fundamental change of approach indicated by novel basic theory), creating an overhang ripe for exploitation by algorithmic improvements invented by first AGIs. Also, creating theory and software faster than hardware can be built or modified changes priorities. There are things humans would just build custom hardware for because it’s much cheaper and faster than attempting optimization through theory and software.
First engine was 0.5 percent efficient. Modern engines reach 91 percent. So roughly 2 orders of magnitude were possible. The steam engine from the far future can be at most 100 percent efficient.
We can say that with our theory of conservation of energy, but there is currently no such theory for what it takes to obtain an AGI-worthy level of competence in a system. So I have wide uncertainty about the number of OOMs in possible improvement between the first try and theoretical perfection.
I can’t prove that algorithms from the cognitive science of the far future can’t work under these constraints. It seems unlikely; probably the minimal computational equivalent of a human-grade robotics model needs either more compute and memory than that, or specialized hardware that does less general work.
Even 800 H100s may be unable to run a human-grade robotics model due to latency; you may require specialized hardware for this.
Existential risk is about cognitive competence, not robotics. Subjectively, GPT-4 seems smart enough to be a human level AGI if it was built correctly and could learn. One of the papers I linked is about running something of GPT-4's scale on a single H200 (possibly only a few instances, since I'm guessing this doesn't compress activations and a large model has a lot of activations). A GPT-4 shaped model massively overtrained on an outrageous quantity of impossibly high quality synthetic data will be more competent than actual GPT-4, so it can probably be significantly smaller while maintaining similar competence. RAG and LoRA fine-tuning give hopelessly broken and weirdly disabled online learning that can run cheaply. If all this is somehow fixed, which I don't expect to happen very soon through human effort, it doesn't seem outlandish for the resulting system to become an AGI (in the sense of being capable of pursuing open ended technological progress and managing its affairs).
Another anchor I like is how 50M parameter models play good Go, being 4 OOMs smaller than 400B LLMs that are broken approximations of the behavior of humans who play Go similarly well. And playing Go is a more well-defined level of competence than being an AGI, probably with fewer opportunities for cleverly sidestepping computational difficulties by doing something different.
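For reference, the parameter-count gap in that Go anchor (both sizes are the ones quoted in the text):

```python
import math

go_model_params = 50e6    # strong Go-playing model, per the text
llm_params = 400e9        # large LLM, per the text
ratio = llm_params / go_model_params
print(f"{ratio:,.0f}x, {math.log10(ratio):.1f} OOMs")
```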
Dangerous AGI means all strategically relevant human capabilities, which means robotic tool use. It may not be physically possible on a 2070 due to latency. (The latency comes from the time to process images, convert to a 3D environment, reason over the environment with a robotics policy, and choose the next action. Simply cheating with a lidar might be enough to make this work, of course.)
With that said I see your point about Go and there is a fairly obvious way to build an “AGI” like system for anything not latency bound that might fit on so little compute. It would need a lot of SSD space and would have specialized cached models for all known human capabilities. Give the “AGI” a test and it loads the capabilities needed to solve the test into memory. It has capabilities at different levels of fidelity and possibly on the fly selects an architecture topology to solve the task.
Since the 2070 has only 8 GB of memory and modern SSDs hit several gigabytes per second, the load time vs humans wouldn’t be significant.
Note this optimization, like all engineering tradeoffs, costs something: in this case huge amounts of disk space. It might need hundreds of terabytes to hold all the models. But maybe there is a way to compress that.
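A rough load-time check for that capability-swapping scheme (3.5 GB/s is an assumed sequential-read figure for a consumer NVMe SSD, not a measured one):

```python
vram_gb = 8              # RTX 2070 memory capacity
ssd_gb_per_s = 3.5       # assumed sequential read speed of a modern NVMe SSD
load_seconds = vram_gb / ssd_gb_per_s
print(f"~{load_seconds:.1f} s to swap in a full working set")
```

A couple of seconds per full model swap, which is indeed not significant compared to human task-switching time.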
Anyways, do you have any good policy ideas besides centralizing all the latest AI silicon into places where it can be inspected?
The little problem with pauses in this scenario is: say you can get AI models from thousands of years in the future, today. What else can you get your hands on...
Once there are AGIs and they had some time to figure things out, or once there are ASIs (which don’t necessarily need to figure things out at length to become able to start making relatively perfect moves), it becomes possible to reach into the bag of their technological wisdom and pull out scary stuff that won’t be contained on their original hardware, even if the AIs themselves remain contained. So for a pause to be effective, it needs to prevent existence of such AIs, containing them is ineffective if they are sufficiently free to go and figure out the scary things.
Without a pause, the process of pulling things out of the bag needs to be extremely disciplined, focusing on pivotal processes that would prevent yourself or others from accidentally or intentionally pulling out an apocalypse 6 months later. And hoping that there’s nothing going on that releases the scary things outside of your extremely disciplined process intended to end the acute risk period, because hope is the only thing you have going for you without decades of research that you don’t have because there was no pause.
Dangerous AGI means all strategically relevant human capabilities which means robotic tool use. It may not be physically possible on a 2070 due to latency.
There are many meanings of “AGI”; the one I intend in this context is about cognitive competence. The choice to focus on this meaning rather than some other follows from what I expect to pose existential threat. In this sense, “(existentially) dangerous AGI” means the consequences of its cognitive activities might disempower or kill everyone. The activities don’t need to involve personally controlling terminators; as a silly example, setting up a company that designs terminators would have similar effects without requiring good latency.
as a silly example setting up a company that designs terminators would have similar effects without requiring good latency.
Just a note here: good robotics experts today are “hands on” with the hardware. It gives humans a more grounded understanding of current designs and lets them iterate onto the next one. Good design doesn’t come from just thinking about it, and this would apply to designing humanoid combat robots.
This is also why more grounded current-generation experts will strongly disagree with the very idea of pulling any technology from thousands of years in the future. Current-generation experts will, and do in AI debates, say that you need to build a prototype, test it in the real world, build another based on the information gained, and so on.
This is factually how all current technology was developed. No humans have, to my knowledge, ever done what you described—skipped a technology generation without a prototype or many at scale physical (or software) creations being used by end users. (Historically it has required more and more scale at later stages)
If this limitation still applies to superintelligence—and there are reasons to think it might but I would request a dialogue not a comment deep in a thread few will read—then the concerns you have expressed regarding future superintelligence are not legitimate worries.
If ASIs are in fact limited the same way, the way the world would be different is that each generation of technology developed by ASI would get built and deployed at scale. The human host country would use and test the new technology for a time period. The information gained from actual use is sold back to the ASI owners who then develop the next generation.
This whole iterative process is faster than human tech generation but it still happens in discrete steps and on a large, visible scale, and at human perceptible timescales. Probably weeks per cycle not months to the 1-2 year cycle humans are able to do.
There are reasons it would take weeks and it’s not just human feedback, you need time to deploy a product and you are mining 1 percent or lower edge cases in later development stages.
Yes, physical prototypes being inessential is decidedly not business as usual. Without doing things in the physical world, there need to be models for simulations, things like AlphaFold 2 (which predicts far more than would be possible to experimentally observe directly). You need enough data to define the rules of the physical world, and efficient simulations of what the rules imply for any project you consider. I expect automated theory at sufficiently large scale or superintelligent quality to blur the line between (simulated) experiments and impossibly good one shot engineering.
And the way it would fail is if the simulations have an unavoidable error factor because real-world physics is incomputable or in a problem class above NP (there are posts here showing it is). The other way it would fail is if the simulations are missing information; for example, in real battles between humanoid terminators the enemy might discover effective strategies the simulation didn’t model. So after deploying a lot of humanoid robots, rival terminators start winning the battles and it’s back to the design stage.
If you did it all in simulation I predict the robot would immediately fail and be unable to move. Humanoid robots are interdependent systems.
I am sure you know current protein folding algorithms are not accurate enough to actually use for protein engineering. You have to actually make the protein and test it along with the molecules you want binding logic for, and you will need to adjust your design. If the above is still true for ASI they will be unable to do protein engineering without a wet lab to build and test the parts for every step, where there will be hundreds of failures for every success.
If it does work that way, then ASI will be faster than humans—since they can analyze more information at once, learn faster, and conservatively try many routes in parallel—but not by the factors you imagine. Maybe 10 times faster, not millions. This is because of Amdahl’s law.
The ASI are also cheaper. So it would be like a world where every technology in every field is being developed at the maximum speed humans could run at times 10.
(Theorizing about ASIs that have no access to physical reality feels noncentral in 2023 when GPT-4 has access to everything and everyone, and integration is only going to get stronger. But for the hypothetical ASI that grew out of an airgapped multimodal 1e29 LLM that has seen all youtube and read all papers and books and the web, I think ability to do good one shot engineering holds.)
(Also, we were discussing an exfiltrated AGI, for why else is RTX 2070 relevant, that happens to lack good latency to control robots. Presumably it doesn’t have the godshatter of technical knowledge, or else it doesn’t really matter that it’s a research-capable AGI. But it now has access to the physical world and can build prototypes. It can build another superintelligence. If it does have a bequest of ASI’s technical knowledge, it can just work to setup unsanctioned datacenters or a distributed network and run an OOMs-more-efficient-than-humanity’s-first-try superintelligence there.)
Predictability is vastly improved by developing the thing you need to predict yourself, especially when you intend to one shot it. Humans don’t do this, because for humans it happens to be much faster and cheaper to build prototypes, we are too slow at thinking useful thoughts. We possibly could succeed a lot more than observed in practice if each prototype was preceded by centuries of simulations and the prototypes were built with insane redundancies.
Simulations get better with data and with better learning algorithms. Looking at how a simulation works, it’s possible to spot issues and improve the simulation, including for the purpose of simulating a particular thing. Serial speed advantage more directly gives theory and general software from distant future (as opposed to engineering designs and experimental data). This includes theory and software for good learning algorithms, those that have much better sample efficiency and interrogate everything about the original 1e29 LLM to learn more of what its weights imply about the physical world. It’s a lot of data, who knows what details can be extracted from it from the position of theoretical and software-technological maturity.
None of this exists now though. Speculating about the future when it depends on all these unknowns and never before seen capabilities is dangerous—you’re virtually certain to be wrong. The uncertainty comes from all the moving parts in your model. Like you have:
Immense amounts of compute easily available
Accurate simulations of the world
Fully automated AGI, there are no humans helping at all, and the model never gets stuck or crashes from a bug in the lower framework
Enormously past human capabilities ASI. Not just a modest amount.
The reason you are probably wrong is just probability: if each step has a 50 percent chance of being right, it’s 0.5^4. Don’t think of it as me saying you’re wrong.
And then only with all these pieces, humans are maybe doomed and will soon cease to exist. Therefore we should stop everything today.
While if just 1 piece is wrong, then this is the wrong choice to make. Right?
You’re also up against a pro-technology prior. Meaning I think you would have to actually prove the above—demo it—to convince people this is the actual world we are in.
That’s because “future tech, instead of turning out to be overhyped, is going to be so amazing and perfect it can kill everyone quickly and easily” runs against all the priors where tech turned out to be underwhelming and not that good. Like convincing someone the wolf is real when there have been probably a million false alarms.
I don’t know how to think about this correctly. Like, I feel I should be weighing the mountain of evidence I mentioned, but if I do that then humans will always die to the ASI. Because there’s no warning. The whole threat model is that these are capabilities that are never seen prior to a certain point.
The whole threat model is that these are capabilities that are never seen prior to a certain point.
Yep, that’s how ChatGPT is a big deal for waking up policymakers, even as it’s not exactly relevant. I see two paths to a lasting pause. First, LLMs keep getting smarter and something object level scary happens before there are autonomous open weight AGIs, policymakers shut down big models. Second, 1e29 FLOPs is insufficient with LLMs, or LLMs stop getting smarter earlier and 1e29 FLOPs models are not attempted, and models at the scale that’s reached by then don’t get much smarter. It’s still unlikely that people won’t quickly find a way of using RL to extract more and more useful work out of the kind of data LLMs are trained on, but it doesn’t seem impossible that it might take a relatively long time.
Immense amounts of compute easily available
The other side to the argument for AGI in RTX 2070 is that the hardware that was sufficient to run humanity’s first attempt at AGI is sufficient to do much more than that when it’s employed efficiently.
Fully automated AGI, there are no humans helping at all, and the model never gets stuck or crashes from a bug in the lower framework
This is the argument’s assumption, the first AGI should be sufficiently close to this to fix the remaining limitations that make full autonomy reliable, including at research. Possibly requiring another long training run, if cracking online learning directly might take longer than that run.
Enormously past human capabilities ASI. Not just a modest amount.
I expect this, but this is not necessary for development of deep technological culture using serial speed advantage at very smart human level.
Accurate simulations of the world
This is more an expectation based on the rest than an assumption.
The reason you are probably wrong is just probability: if each step has a 50 percent chance of being right, it’s 0.5^4.
These things are not independent.
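A toy calculation of why that matters: if the four claims share a common cause (say, a single underlying question of whether intelligence and simulation quality scale together), the joint probability can be far higher than the independent product.

```python
p_each = 0.5

# If the four claims were independent:
independent_joint = p_each ** 4   # 0.5^4 = 0.0625, as in the comment above

# If they stand or fall together (perfect correlation, as an extreme case):
correlated_joint = p_each         # 0.5, eight times higher

print(independent_joint, correlated_joint)
```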
Speculating about the future when it depends on all these unknowns and never before seen capabilities is dangerous—you’re virtually certain to be wrong.
That’s an argument about calibration. If you are doing the speculation correctly, not attempting to speculate is certain to leave a less accurate picture than doing it.
This is probably more fiddly at larger scales, but the striking specific thing to point to are loss measurements in this Oct 2023 paper (see page 7), which claims
and
As loss doesn’t just manageably deteriorate, but essentially doesn’t change for 1.5b parameter models (when going from FP32 to MXFP6_E3M2 during pre-training), there is probably a way to keep that working at larger scales. This is about Microscaling datatypes, not straightforward lower precision.
Skimming the paper and the method: yeah, this should work at any scale. This type of mixed precision algorithm has a cost though. You still need hardware support for the higher precisions, which costs you chip area and complexity, as you need to multiply the block scaling factor and use an accumulator to hold the product that is 32 bit.
On paper Nvidia claims their tf32 is only 4 times slower than int8, so the gains with this algorithm are small on that hardware because you have more total weights with micro scaling or other similar methods.
For inference accelerators, being able to drop fp32 entirely is where the payoff really is, that’s a huge chunk of silicon you can leave out of the design. Mixed precision helps there. 3x or so benefit is also large at inference time because it’s 3x less cost, Microsoft would make money from copilot if they could reduce their costs 3x.
Oh but less VRAM consumption which is the limiting factor for LLMs. And...less network traffic between nodes in large training clusters. Flat ~3x boost to the largest model you can train?
Right, hence the point is about future-proofing the FLOP/s estimate, it’s a potential architecture improvement that’s not bottlenecked by the slower low level hardware improvement. If you bake quantization in during pre-training, the model grows up adapted to it and there is no quality loss caused by quantizing after pre-training. It’s 5-8 times less memory, and with hardware acceleration (while not designing the chip to do anything else) probably appropriately faster.
Taken together with papers like this, this suggests that there is further potential if we don’t try to closely reproduce linear algebra, which might work well if pre-training adapts the model from the start to the particular weird way of computing things (and if an optimizer can cope). For fitting larger models into fewer nodes for sane latency during inference there’s this (which suggests a GPT-4 scale transformer might run on 1-2 H200s; though at scale it seems more relevant for 1e28 FLOPs models, if I’m correct in guessing this kills throughput). These are the kinds of things that given the current development boom will have a mainstream equivalent in 1-3 years if they work at all, which they seem to.
So I think there is something interesting from this discussion. Those who worry the most about AI doom, Max H being one, have posited that an optimal AGI could run on a 2070.
Yet the most benefits come, like you say, from custom hardware intended to accelerate the optimization. I mentioned arbitrary sparsity, where each layer of the network can consume an arbitrary number of neural tiles and there are many on chip subnets to allow a large possibility space of architectures. Or you mentioned training accelerators designed for mixed precision training.
Turing complete or not, the wrong hardware will be thousands of times slower. It could be in the next few years that newer models only run on new silicon, obsoleting all existing silicon, and this could happen several times. It obviously did for GPU graphics.
This would therefore be a way to regulate AI. If you could simply guarantee that the current generation of AI hardware was concentrated into known facilities worldwide, nowhere else (robots would use special edge cards with ICs missing the network controllers for clustering), risks would probably be much lower.
This helps to regulate AI that doesn’t pose existential threat. AGIs might have time to consolidate their cognitive advantages while running on their original hardware optimized for AI. This would quickly (on human timescale) give them theory and engineering plans from distant future (of a counterfactual human civilization without AGIs). Or alternatively superintelligence might be feasible without doing such work on human level, getting there faster.
In that situation, I wouldn’t rule out running an AGI on a 2070, it’s just not a human-designed AGI, but one designed either by superintelligence or by an ancient civilization of human-adjacent level AGIs (in the sense of serial depth of their technological culture). You might need 1e29 FLOPs of AI-optimized hardware to make a scaffolded LLM that works as a weirdly disabled AGI barely good enough to invent its way into removing its cognitive limitations. You don’t need that for a human level AGI designed from the position of thorough comprehension of how AGIs work.
So what you are saying here could be equivalent to “limitless optimization is possible”. Meaning that given a particular computational problem, it is possible to find an equivalent algorithm to solve that problem that is thousands of times faster or more. Note how the papers you linked don’t show that, the solution trades off complexity and the number of weights for somewhere in the range of 3-6 times less weights and speed.
You can assume more aggressive optimizations may require more and more tradeoffs and silicon support older GPUs won’t have. (Older GPUs lacked any fp16 or lower matrix support)
This is the usual situation for all engineering, most optimizations come at a cost. Complexity frequently. And there exists an absolute limit. Like if we compare Newxomens first steam engine vs the theoretical limit for a steam engine. First engine was 0.5 percent efficient. Modern engines reach 91 percent. So roughly 2 orders of magnitude were possible. The steam engine from the far future can be at most 100 percent efficient.
For neural networks, a sparse network has the same time complexity, O(n^2), as the dense version. So if you think 10 times sparsity will work, and you can optimize further with lower-precision math, that’s about 2 orders of magnitude of optimization.
How much better can you do? If you need 800 H100s to host a human brain equivalent machine (for the vram) with some optimization, is there really enough room left to find an equivalent function, even from the far future, that will fit on a 2070?
Note this would be going from 316,640 INT8 TOPS (for 80 H100s) down to 60 TOPS, about a 5,300× reduction. For VRAM, since that’s the limiting factor, it would be an 8,000× reduction.
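As a sanity check of those ratios, here are the per-card specs I’m assuming (my assumptions, not from the thread: ~3,958 INT8 TOPS per H100 with structured sparsity, 80 GB VRAM per H100, ~60 INT8 TOPS and 8 GB VRAM for an RTX 2070):

```python
# Assumed per-card specs (mine, not from the thread)
h100_int8_tops = 3958     # H100 with structured sparsity
h100_vram_gb = 80
rtx2070_int8_tops = 60
rtx2070_vram_gb = 8

# The compute ratio above is quoted for 80 H100s, the VRAM ratio for 800
compute_ratio = 80 * h100_int8_tops / rtx2070_int8_tops
vram_ratio = 800 * h100_vram_gb / rtx2070_vram_gb

print(f"compute: ~{compute_ratio:,.0f}x")  # ~5,277x, i.e. the ~5,300x above
print(f"VRAM:    {vram_ratio:,.0f}x")      # 8,000x
```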
I can’t prove that algorithms from the cognitive science of the far future can’t work under these constraints. It seems unlikely, though: probably there is some minimum computational equivalent of a human-grade robotics model, and it needs either more compute and memory than that, or specialized hardware that does less general work.
Even 800 H100s may be unable to run a human-grade robotics model due to latency; you may require specialized hardware for this.
What can we the humans do about this? “Just stop building amazing AI tools because AI will find optimizations from the far future and infect all the computers” isn’t a very convincing argument without demoing it.
AGI is not a specific computation that needs to be optimized to run faster without functional change, it’s a vaguely defined level of competence. There are undoubtedly multiple fundamentally different ways of building AGI that won’t be naturally obtained from each other by incrementally optimizing performance, you’d need more theory to even find them.
So what I’m saying is that there is a bounded but still large gap between how we manage to build AGI for the first time, in a rush and without a theoretical foundation that describes how to do it properly, and how an ancient civilization of at least somewhat smarter-than-human AGIs can do it while thinking for subjective human-equivalent centuries of serial time about both the general theory and the particular problem of creating a bespoke AGI for an RTX 2070.
There can’t be modern papers that describe specifically how the deep technological culture invented by AGIs does such things, so of course the papers I listed are not about that, they are relevant to the thing below the large gap, of how humans might build the first AGIs.
Yes. I’m betting humans won’t come close to it on first try, therefore significant optimization beyond that first try would be possible (including through fundamental change of approach indicated by novel basic theory), creating an overhang ripe for exploitation by algorithmic improvements invented by first AGIs. Also, creating theory and software faster than hardware can be built or modified changes priorities. There are things humans would just build custom hardware for because it’s much cheaper and faster than attempting optimization through theory and software.
We can say that with our theory of conservation of energy, but there is currently no such theory for what it takes to obtain an AGI-worthy level of competence in a system. So I have wide uncertainty about the number of OOMs in possible improvement between the first try and theoretical perfection.
Existential risk is about cognitive competence, not robotics. Subjectively, GPT-4 seems smart enough to be a human level AGI if it was built correctly and could learn. One of the papers I linked is about running something of GPT-4’s scale on a single H200 (possibly only a few instances, since I’m guessing this doesn’t compress activations and a large model has a lot of activations). A GPT-4 shaped model massively overtrained on an outrageous quantity of impossibly high quality synthetic data will be more competent than actual GPT-4, so it can probably be significantly smaller while maintaining similar competence. RAG and LoRA fine-tuning give hopelessly broken and weirdly disabled online learning that can run cheaply. If all this is somehow fixed, which I don’t expect to happen very soon through human effort, it doesn’t seem outlandish for the resulting system to become an AGI (in the sense of being capable of pursuing open ended technological progress and managing its affairs).
Another anchor I like is how 50M parameter models play good Go, being 4 OOMs smaller than 400B LLMs that are broken approximations of the behavior of humans who play Go similarly well. And playing Go is a more well-defined level of competence than being an AGI, probably with fewer opportunities for cleverly sidestepping computational difficulties by doing something different.
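For the record, the parameter gap quoted there works out to just under 4 orders of magnitude:

```python
import math

go_model_params = 50e6   # ~50M-parameter Go model
llm_params = 400e9       # ~400B-parameter LLM

gap_ooms = math.log10(llm_params / go_model_params)
print(f"~{gap_ooms:.1f} OOMs")  # ~3.9 OOMs, i.e. roughly the 4 cited
```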
Dangerous AGI means all strategically relevant human capabilities, which means robotic tool use. It may not be physically possible on a 2070 due to latency. (The latency is from the time to process images, convert them to a 3D environment, reason over the environment with a robotics policy, and choose the next action. Simply cheating with a lidar might be enough to make this work, of course.)
With that said I see your point about Go and there is a fairly obvious way to build an “AGI” like system for anything not latency bound that might fit on so little compute. It would need a lot of SSD space and would have specialized cached models for all known human capabilities. Give the “AGI” a test and it loads the capabilities needed to solve the test into memory. It has capabilities at different levels of fidelity and possibly on the fly selects an architecture topology to solve the task.
Since the 2070 has only 8 GB of memory and modern SSDs hit several gigabytes a second, the load time versus humans wouldn’t be significant.
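A rough estimate of that swap time, with the SSD read speeds below being my assumptions for “several gigabytes a second”:

```python
# How long to refill the 2070's VRAM from disk under the caching scheme above
vram_gb = 8  # RTX 2070 memory

# Assumed SSD sequential read speeds (mine, not from the thread)
for name, read_gbps in [("PCIe 3 NVMe, ~3 GB/s", 3.0),
                        ("PCIe 4 NVMe, ~7 GB/s", 7.0)]:
    print(f"{name}: {vram_gb / read_gbps:.1f} s to swap in a capability")
```

A few seconds per capability swap, which is indeed comparable to a human pausing to recall how to do something.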
Note that this optimization, like all engineering tradeoffs, costs something: in this case, huge amounts of disk space. It might need hundreds of terabytes to hold all the models. But maybe there is a compression scheme that helps.
Anyways, do you have any good policy ideas besides centralizing all the latest AI silicon into places where it can be inspected?
The little problem with pauses in this scenario is say you can get AI models from thousands of years in the future today. What else can you get your hands on...
Once there are AGIs and they had some time to figure things out, or once there are ASIs (which don’t necessarily need to figure things out at length to become able to start making relatively perfect moves), it becomes possible to reach into the bag of their technological wisdom and pull out scary stuff that won’t be contained on their original hardware, even if the AIs themselves remain contained. So for a pause to be effective, it needs to prevent existence of such AIs, containing them is ineffective if they are sufficiently free to go and figure out the scary things.
Without a pause, the process of pulling things out of the bag needs to be extremely disciplined, focusing on pivotal processes that would prevent yourself or others from accidentally or intentionally pulling out an apocalypse 6 months later. And hoping that there’s nothing going on that releases the scary things outside of your extremely disciplined process intended to end the acute risk period, because hope is the only thing you have going for you without decades of research that you don’t have because there was no pause.
There are many meanings of “AGI”, the meaning I’m intending in this context is about cognitive competence. The choice to focus on this meaning rather than some other meaning follows from what I expect to pose existential threat. In this sense, “(existentially) dangerous AGI” means the consequences of its cognitive activities might disempower or kill everyone. The activities don’t need to be about personally controlling terminators, as a silly example setting up a company that designs terminators would have similar effects without requiring good latency.
Just a note here: good robotics experts today are “hands on” with the hardware. It gives humans a more grounded understanding of current designs and lets them iterate onto the next one. Good design doesn’t come from just thinking about it, and this would apply to designing humanoid combat robots.
This is also why more grounded current-generation experts will strongly disagree with the very idea of pulling any technology from thousands of years in the future. Current-generation experts will say, and do say in AI debates, that you need to build a prototype, test it in the real world, build another based on the information gained, and so on.
This is factually how all current technology was developed. No humans have, to my knowledge, ever done what you described: skipped a technology generation without a prototype, or without many at-scale physical (or software) creations being used by end users. (Historically it has required more and more scale at later stages.)
If this limitation still applies to superintelligence—and there are reasons to think it might but I would request a dialogue not a comment deep in a thread few will read—then the concerns you have expressed regarding future superintelligence are not legitimate worries.
If ASIs are in fact limited the same way, the way the world would be different is that each generation of technology developed by ASI would get built and deployed at scale. The human host country would use and test the new technology for a time period. The information gained from actual use is sold back to the ASI owners who then develop the next generation.
This whole iterative process is faster than human tech generations, but it still happens in discrete steps, on a large, visible scale, and at human-perceptible timescales. Probably weeks per cycle, rather than the months to 1–2 years humans are able to do.
There are reasons it would take weeks, and it’s not just human feedback: you need time to deploy a product, and in later development stages you are mining edge cases that occur 1 percent of the time or less.
Yes, physical prototypes being inessential is decidedly not business as usual. Without doing things in the physical world, there need to be models for simulations, things like AlphaFold 2 (which predicts far more than would be possible to experimentally observe directly). You need enough data to define the rules of the physical world, and efficient simulations of what the rules imply for any project you consider. I expect automated theory at sufficiently large scale or superintelligent quality to blur the line between (simulated) experiments and impossibly good one shot engineering.
And the way it would fail is if the simulations have an unavoidable error factor because real-world physics is incomputable or in a problem class above NP (there are posts here arguing it is). The other way it would fail is if the simulations are missing information; for example, in real battles between humanoid terminators the enemy might discover effective strategies the simulation didn’t model. So after deploying a lot of humanoid robots, rival terminators start winning the battles and it’s back to the design stage.
If you did it all in simulation I predict the robot would immediately fail and be unable to move. Humanoid robots are interdependent systems.
I am sure you know current protein folding algorithms are not accurate enough to actually use for protein engineering. You have to actually make the protein and test it along with the molecules you want binding logic for, and you will need to adjust your design. If the above is still true for ASI, they will be unable to do protein engineering without a wet lab to build and test the parts at every step, where there will be hundreds of failures for every success.
If it does work that way, then ASI will be faster than humans, since they can analyze more information at once, learn faster, and conservatively try many routes in parallel, but not by the factors you imagine. Maybe 10 times faster, not millions. This is because of Amdahl’s law.
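To make the Amdahl’s-law point concrete: if some fraction of each development cycle is irreducibly physical (building, deploying, waiting on real-world feedback), then speeding up only the thinking caps the overall speedup at roughly the inverse of that fraction. The fractions below are illustrative, not measured:

```python
def amdahl_speedup(serial_fraction, thinking_speedup):
    """Overall speedup when only the non-serial part is accelerated."""
    return 1 / (serial_fraction + (1 - serial_fraction) / thinking_speedup)

# Even a million-fold faster thinker is capped by the physical fraction:
for serial in [0.5, 0.1, 0.01]:
    print(f"physical fraction {serial:.0%}: "
          f"overall ~{amdahl_speedup(serial, 1e6):.0f}x")
```

With about 10% of each cycle being physical, the cap is roughly 10×, which is where the estimate above comes from.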
The ASI are also cheaper. So it would be like a world where every technology in every field is being developed at the maximum speed humans could run at times 10.
(Theorizing about ASIs that have no access to physical reality feels noncentral in 2023 when GPT-4 has access to everything and everyone, and integration is only going to get stronger. But for the hypothetical ASI that grew out of an airgapped multimodal 1e29 LLM that has seen all youtube and read all papers and books and the web, I think ability to do good one shot engineering holds.)
(Also, we were discussing an exfiltrated AGI, for why else is RTX 2070 relevant, that happens to lack good latency to control robots. Presumably it doesn’t have the godshatter of technical knowledge, or else it doesn’t really matter that it’s a research-capable AGI. But it now has access to the physical world and can build prototypes. It can build another superintelligence. If it does have a bequest of ASI’s technical knowledge, it can just work to set up unsanctioned datacenters or a distributed network and run an OOMs-more-efficient-than-humanity’s-first-try superintelligence there.)
Predictability is vastly improved by developing the thing you need to predict yourself, especially when you intend to one shot it. Humans don’t do this, because for humans it happens to be much faster and cheaper to build prototypes, we are too slow at thinking useful thoughts. We possibly could succeed a lot more than observed in practice if each prototype was preceded by centuries of simulations and the prototypes were built with insane redundancies.
Simulations get better with data and with better learning algorithms. Looking at how a simulation works, it’s possible to spot issues and improve the simulation, including for the purpose of simulating a particular thing. Serial speed advantage more directly gives theory and general software from distant future (as opposed to engineering designs and experimental data). This includes theory and software for good learning algorithms, those that have much better sample efficiency and interrogate everything about the original 1e29 LLM to learn more of what its weights imply about the physical world. It’s a lot of data, who knows what details can be extracted from it from the position of theoretical and software-technological maturity.
None of this exists now though. Speculating about the future when it depends on all these unknowns and never-before-seen capabilities is dangerous: you’re virtually certain to be wrong. The uncertainty comes from all the moving parts in your model. Like you have:
Immense amounts of compute easily available
Accurate simulations of the world
Fully automated AGI: no humans helping at all, and the model never gets stuck or crashes from a bug in the underlying framework
An ASI enormously past human capabilities, not just a modest amount
The reason you are probably wrong is just probability: if each step has a 50 percent chance of being right, the whole thing is 0.5^4. Don’t think of it as me saying you’re wrong.
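The conjunction arithmetic being appealed to:

```python
# If the scenario requires four independent pieces, each with a 50%
# chance of being right, the joint probability of all four is small:
p_each = 0.5
pieces = 4
joint = p_each ** pieces
print(joint)  # 0.0625, i.e. ~6%
```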
And then only with all these pieces, humans are maybe doomed and will soon cease to exist. Therefore we should stop everything today.
While if just 1 piece is wrong, then this is the wrong choice to make. Right?
You’re also up against a pro-technology prior. Meaning I think you would have to actually prove the above, demo it, to convince people this is the actual world we are in.
That’s because “future tech instead of turning out to be over hyped is going to be so amazing and perfect it can kill everyone quickly and easily” is against all the priors where tech turned out to be underwhelming and not that good. Like convincing someone the wolf is real when there’s been probably a million false alarms.
I don’t know how to think about this correctly. Like I feel like I should be weighing the mountain of evidence I mentioned, but if I do that then humans will always die to the ASI. Because there’s no warning. The whole threat model is that these are capabilities that are never seen prior to a certain point.
Yep, that’s how ChatGPT is a big deal for waking up policymakers, even as it’s not exactly relevant. I see two paths to a lasting pause. First, LLMs keep getting smarter and something object level scary happens before there are autonomous open weight AGIs, policymakers shut down big models. Second, 1e29 FLOPs is insufficient with LLMs, or LLMs stop getting smarter earlier and 1e29 FLOPs models are not attempted, and models at the scale that’s reached by then don’t get much smarter. It’s still unlikely that people won’t quickly find a way of using RL to extract more and more useful work out of the kind of data LLMs are trained on, but it doesn’t seem impossible that it might take a relatively long time.
The other side to the argument for AGI in RTX 2070 is that the hardware that was sufficient to run humanity’s first attempt at AGI is sufficient to do much more than that when it’s employed efficiently.
This is the argument’s assumption: the first AGI should be sufficiently close to this to fix the remaining limitations and make full autonomy reliable, including at research. This possibly requires another long training run, if cracking online learning directly would take longer than that run.
I expect this, but this is not necessary for development of deep technological culture using serial speed advantage at very smart human level.
This is more an expectation based on the rest than an assumption.
These things are not independent.
That’s an argument about calibration. If you are doing the speculation correctly, not attempting to speculate is certain to leave a less accurate picture than doing it.
If you feel there are further issues to discuss, pm me for a dialogue.