Thoughts on Hardware Limits to Prevent AGI?

Summary: An individual Commodore 64 is almost certainly safe, and a top 10 supercomputer could almost certainly run a super-powerful AGI, but where is the safe line, and how would we get to the safe side?

I started thinking about this topic when I realized that we can safely use uranium because we have a field of nuclear criticality safety,[1] but we have no field of computer foom safety (or Artificial General Intelligence takeoff safety).[2] For example, if we had such a field we might have a function AGIT(architecture, time, flops, memory) → Bool that tells us whether a computer could take off into an AGI with that amount of resources. Making this a total function (giving a value for all of its domain) might not be possible, but even a partial function could be useful.

Note that by computer foom safety my worry is that an AI project will result in a powerful AGI that is neither controllable nor ethical, and that either leaves the world substantially worse than humans would make it on our own or results in humanity dying. The three main failure modes I worry about are (1) the AGI's utility function does not care about preventing the deaths of sentient beings, (2) the AGI uses up substantial portions of the resources in the universe, and (3) the AGI does not get consent before "helping" sentient beings.[3] Note also that an alternative to preventing AGI by restricting hardware (an engineering control) is restricting which AI programs may run on computers (an administrative control).
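
As a sketch of what such an AGIT partial function might look like in code (the thresholds below are only the rough guesses argued for later in this post, not the output of any real theory, and the architecture and time arguments are ignored):

```python
from typing import Optional

def agit(architecture: str, time_seconds: float,
         flops: float, memory_bytes: float) -> Optional[bool]:
    """Could a computer with these resources take off into an AGI?

    Returns True (plausibly yes), False (almost certainly no), or None
    (unknown) -- i.e. a partial function.  The architecture and time
    arguments are ignored in this crude sketch.
    """
    KiB, TiB = 2**10, 2**40
    # Commodore 64 class: too little memory for fluent language or
    # useful physical simulation (argued below).
    if flops <= 25e3 and memory_bytes <= 64 * KiB:
        return False
    # Watson class and above: enough to train something LaMDA-like
    # within a year (argued below), so treat it as dangerous.
    if flops >= 80e12 and memory_bytes >= 16 * TiB:
        return True
    # Everything in between: the field we don't have yet.
    return None
```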

Alien Computer Instructions

A possible scenario where we would actually want to know which computers are provably safe is the following science fiction introduction:

Astronomers sight an incoming interstellar object as it enters the solar system. Humans manage to send out a probe to fly by it and discover it is artificial. The delta-v required to catch up and then match velocities is horrendous, but humans manage to put together a second robot probe to intercept it. That probe reaches the interstellar object, and we discover that the object had been subjected both to a strong electromagnetic pulse that fried any electronics and to a thermal shock (possibly a laser) that caused further damage. Examining the inside, the probe discovers glass etched with pulses;[4] after some creative engineering and improvising, the probe manages to read the data and transmit it to Earth.

After some work to decode it (it was deliberately made easy to decode, however), it is discovered that the data describe how to make machines, mostly computers (and tools to make computers): a mechanical difference engine,[5] a relay-based 16-word, 36-bit computer with paper tape readers and writers, a somewhat bigger 4 KiB diode/magnetic-logic computer,[6] a 64 KiB transistor computer,[7] and a 100 teraFLOPS, 16 terabyte integrated-circuit supercomputer.[8] There are also various input/output devices, including a robot arm that attaches to the computers. Programs for the computers are included as well, along with virtual machine descriptions of the computers.

The dilemma humanity faces is: should we build any of the machines, and should we run any of the programs? It seems likely that if we do not build them, nothing will happen. The damage to the interstellar object seems to indicate that someone did not want it to succeed.

Building a machine specified by an advanced alien can be dangerous, since it might have hidden capabilities.[9] The various programs provided have minimum CPU and memory requirements, so they could also be run in virtual machines. How powerful a computer are we willing to provide to an unknown program?

I am guessing that 64 KiB of RISC-V RV64GCV machine language code would be more than sufficient to hold a program for training and running a transformer model, plus a simple simulation of Feynman's classical physics formulation.[10] It could probably fit the Standard Model and general relativity instead. So a small program could easily include enough to get to near-AGI and a basic understanding of the universe in 64 KiB of code if run on a large and fast enough computer, and I suspect that an unsafe AGI could be done in a similar amount of code to a transformer model.

So, in the above scenario, is there any sufficiently small and slow computer on which we might actually feel at least somewhat safe running the programs? Note that unlike the Halting Problem or Rice's theorem, which concern Turing machines with an infinite tape, we are dealing with machines with finite memory, so some things are provable here that would not be for a Turing machine.

Provably Safe and Unsafe Computers?

I have tried to figure out what the threshold for computing power for a super-intelligent artificial general intelligence (AGI) is.[11]

Proving that an AGI can't be smart enough to escape is tricky. There are three basic ways I can think of that an AGI could use to escape: manipulating humans, manipulating the environment, or manipulating other computer infrastructure. Manipulating other computer infrastructure is something computer viruses have already done for decades, and it can gain other resources that can then be used for one of the other breakout methods. Manipulating humans probably requires at least some level of fluency in language. Manipulating the environment requires both some knowledge of the environment and some ability to simulate it. As George Box said, "All models are wrong; some models are useful," so the trick is figuring out whether the model's approximations are so great that the model stops being useful. The more computing power is available, the fewer approximations are needed. So one way to prove that an AGI can't escape is to prove that the available computing power allows neither fluent language nor sufficient simulations.

On the low end, I am fairly certain that a Commodore 64 (25 kFLOPS, 64 KiB) cannot run an AGI. A typical human vocabulary of 20,000 words would not fit in memory without compression,[12] and adding definitions would increase the size even more, so it would not be possible to fit a program that uses unrestricted language fluently. As another example, a molecular simulation that used 4 bytes per float, 3 vectors of 3 floats per molecule, and a beginning and end state could not fit even a 10x10x10 cube of molecules in 64 KiB. So the computer can do neither fluent language nor complicated simulations, and I have a hard time imagining any way that any program on a C64 could be a dangerous AGI.[13]
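
The arithmetic behind both claims is easy to check; here is a quick sketch (the ~6 bytes per vocabulary word is my own rough assumption for uncompressed English text):

```python
# Back-of-the-envelope checks for the Commodore 64 claims above.
C64_RAM = 64 * 1024                        # 65,536 bytes

# Vocabulary: 20,000 words at roughly 6 bytes per word, with no room
# yet for definitions or grammar.
vocab_bytes = 20_000 * 6
print(vocab_bytes, vocab_bytes > C64_RAM)  # 120000 True

# Molecular simulation: a 10x10x10 cube of molecules, 3 vectors of
# 3 floats each (4 bytes per float), with a beginning and an end state.
sim_bytes = 10**3 * 3 * 3 * 4 * 2
print(sim_bytes, sim_bytes > C64_RAM)      # 72000 True
```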

Note that you might be able to network enough C64s to achieve AGI, and it might be possible to create some kind of radio in the circuits or pull other trickiness, so I am not sure that it would be safe to run any program on one without taking precautions like Faraday cages. A cassette used for storage, where the human has to press play or record, wouldn't really change this analysis, but a computer-controlled floppy disk or hard drive would effectively be more (somewhat slow) memory, and so could allow more capabilities. As far as simulation goes, there is nothing magical about 64 KiB: 16 KiB is more restrictive, and there are plenty of simulations that cannot be done in 512 KiB.[14] It seems like it should be possible to prove[15] that a 512 KiB computer also cannot do fluent human language. The above applies to reasonably normal machine languages; if your CPU circuitry includes a full LLM or other AI/AGI tools, then it probably can do a lot with only 64 KiB. Write Once Read Many (WORM) drives and media, such as paper tape, CD-R, DVD-R, and ultraviolet-erasable programmable ROM, are safer than equivalent-capacity drives and media that can be written many times, such as magnetic tape, CD-RW, DVD-RW, and electrically erasable programmable ROM. Lastly, the amount of memory available in registers (such as vector registers) also needs to be included.

On the high end, I am fairly certain that any of the top 10 supercomputers could run a super-intelligent AGI. I base that on estimates that a human brain could be emulated with about 20 petaFLOPS ($2 \times 10^{16}$ FLOPS),[16] so if you have 60 petaFLOPS or more, you could run more efficient algorithms (human brains can't just rewire themselves quickly to dedicate more neurons to the current computation) and end up much more intelligent than a human.[17]

So with high certainty we could prevent accidentally creating a rogue AGI if we all switched to non-networked Commodore 64s. (Requiring a 2.4e12 safety margin might seem excessive, but I am not sure how to reduce it; better theory on AGI takeoff might be able to shrink the gap.)
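
The 2.4e12 figure is just the ratio between the two endpoints used in this post:

```python
# Ratio between the "almost certainly dangerous" and "almost certainly
# safe" endpoints.
top10_flops = 60e15      # ~60 petaFLOPS, a top-10 supercomputer
c64_flops = 25e3         # ~25 kFLOPS, a Commodore 64

margin = top10_flops / c64_flops
print(f"{margin:.1e}")   # 2.4e+12
```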

Improving our knowledge of the hardware limits for AGI takeoff is a challenge. The obvious way of creating an AGI and watching ver take off is rather dangerous. As for trying to figure out safe levels of hardware, I think there are some safer problems. One example is testing lower levels of hardware for a game like Chess or Go and seeing how far down we can go in memory and computing power while still playing at an expert level (see the sketch after this paragraph). Another way is to test the minimum levels needed for a universal computer constructor in a 2D cellular automaton like Conway's Life or Codd's 8-state version. A universal computer constructor (UCC) is a machine that can both run arbitrary computer programs and construct a new copy of itself in the environment it is in. In Codd's cellular automaton this can be done with 94,794 cells.[18] That is a rather small size and doesn't allow proving much, but the more interesting question is how much computational power is needed for a computer to design a UCC for a new 2D cellular automaton environment. This could be tested by giving the specification of the cellular environment at run time and requiring the code to create a running UCC. Of course, experimental knowledge can only provide so much information; a working theory is needed for more exact knowledge.
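
A rough sketch of the chess version of that experiment is below. It assumes the python-chess package and a local Stockfish binary, which are my own choices of harness rather than anything specified in this post; the idea is just to sweep the search budget downward and see where expert-level play is lost.

```python
# Sketch: how small a search budget can a chess engine have and still
# hold its own against a stronger "expert" baseline?
# Assumes python-chess is installed and "stockfish" is on the PATH.
import chess
import chess.engine

def play_game(nodes_small: int, hash_mb: int,
              nodes_expert: int = 10**6) -> float:
    """Score (1 / 0.5 / 0) of a resource-limited engine (White) vs an
    expert-level engine (Black)."""
    small = chess.engine.SimpleEngine.popen_uci("stockfish")
    small.configure({"Hash": hash_mb})       # restrict the hash table
    expert = chess.engine.SimpleEngine.popen_uci("stockfish")
    board = chess.Board()
    while not board.is_game_over():
        if board.turn == chess.WHITE:
            result = small.play(board, chess.engine.Limit(nodes=nodes_small))
        else:
            result = expert.play(board, chess.engine.Limit(nodes=nodes_expert))
        board.push(result.move)
    outcome = board.outcome()
    small.quit()
    expert.quit()
    if outcome.winner is None:
        return 0.5
    return 1.0 if outcome.winner == chess.WHITE else 0.0

# Sweep the search budget down and watch where the score collapses.
for nodes in (10**5, 10**4, 10**3, 10**2):
    scores = [play_game(nodes, hash_mb=1) for _ in range(10)]
    print(nodes, sum(scores) / len(scores))
```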

Probably Safe and Probably Dangerous Computers

Now, a somewhat different question from what is provably safe and what is highly likely to be dangerous is what is probably safe if humans are messing around without the understanding needed to create a provably safe AGI. I think a Cray-1 (a 1975 supercomputer with 8 MiB of RAM and 160 MFLOPS) is reasonably safe. Basically, we have had this computer around for nearly half a century, and we have not created AGI with it. Late-1990s desktop computers also had this amount of computing power, so practically any programmer who wanted this amount of power this millennium has had it. As well, the brain of a fruit fly has about 100 thousand neurons and about 50 million chemical synapses,[19] which in some sense amounts to more computing power than, and similar memory to, a Cray-1 (each synapse can fire multiple times per second), so evolution has not managed to create a general intelligence with this level of computing power either. So I suspect that 8 MiB, 160 MFLOPS computers are reasonably safe.

In the other direction, I think that IBM's Watson computer (80 teraFLOPS, 16 TiB, in 2011) probably could run a super-intelligent AGI. LaMDA, for example, was trained using 123 TFLOPS for 57.7 days,[20] so an 80 teraFLOPS computer could have done the training in under a year. I suspect that LaMDA is close enough to an AGI[21] (probably missing only better training and architecture) that this amount of computing power probably needs to be considered dangerous right now. A single GeForce RTX 4090 has about 73 teraFLOPS, so this level of computing power is widely available. (The memory is a bit more of a limit, since a GeForce RTX 4090 only has 24 GB of RAM, so you would need about 23 of them to fit the parameters from LaMDA, and more if you are training.)[22]
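
The arithmetic for the training-time claim, scaling the reported LaMDA figures linearly:

```python
# How long LaMDA-scale training would take on an 80 TFLOPS machine,
# scaling the reported 123 TFLOPS-for-57.7-days figure linearly.
lamda_flops, lamda_days = 123e12, 57.7
watson_flops = 80e12

days_on_watson = lamda_days * lamda_flops / watson_flops
print(round(days_on_watson, 1))   # ~88.7 days, well under a year
```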

In between is a Raspberry Pi 4B, with 4 GiB of RAM and about 13.5 GFLOPS, and it can run some large language models.[23] I am not sure whether a Raspberry Pi goes more with the safe side or the dangerous side. However, if Raspberry Pi-level computers are cheaply available, it would be possible to combine thousands of them into a Watson-level computer.
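
Roughly how many Pi 4Bs it would take to match Watson on raw FLOPS (ignoring interconnect and memory-bandwidth overhead, which in practice would make the real number larger):

```python
import math

watson_flops = 80e12      # Watson, ~80 TFLOPS
pi_flops = 13.5e9         # Raspberry Pi 4B, ~13.5 GFLOPS

pis_needed = math.ceil(watson_flops / pi_flops)
print(pis_needed)         # 5926 -- a few thousand Pis on paper
```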

Getting to the Safe Side

If the goal is to get from where we are today to a world where computing power is below some limit, there are lots of challenges. A total immediate ban would throw the world into chaos, so the ban would probably have to be phased in to give people time to adapt.

One major challenge is that one way to exceed any safe limit is to use below-the-limit computers to build a cluster above the limit. This means that if we want to avoid reaching some believed maximum safe limit, we actually need to set the administrative limit well below it, based on how many computers we think can be clustered. I suspect this requires a safety factor of at least a thousand.

Shutting down large GPU clusters, as Eliezer Yudkowsky has suggested, is a good first step.[24] I don't think banning only GPUs would be sufficient, because the computing power needed can be created with clusters of CPUs.

I think what is needed is to stop producing new powerful computer chips and to remove the ones that already exist from the world. Preventing the production of new high-powered computer chips is probably the easier part, since the production equipment (like ultraviolet or x-ray lithography equipment such as aligners) is fairly specialized. Getting rid of all the existing powerful computers might be hard and might just result in a black market. A ban on computers with more than 64 KiB of RAM would be helped by banning integrated circuits.[25] Desktop C64-level computers can be made with roughly 10 µm feature-size lithography,[26] and Cray-1-level desktop computers can be made with roughly 0.35 µm lithography.[27]

Safe Computer Conclusions

  • Commodore 64 (64 KiB, 25 kFLOPS): almost certainly safe individually.

  • Cray-1 (8 MiB, 160 MFLOPS): probably safe from accidentally creating an AGI.

  • Raspberry Pi 4B (4 GiB, 13.5 GFLOPS): unknown, but clusters of thousands of them are probably dangerous with current or near-term AI techniques.

  • Watson (16 TiB, 80 TFLOPS): probably dangerous with current or near-term AI techniques.

  • Top 10 supercomputer (1000 TiB, 60 PFLOPS): almost certainly dangerous.

You may be wondering about the fact that we have had computers powerful enough to make an AGI for over a decade, and it hasn't happened. First of all, we have learned more about AI in the past decade. Also, survivorship bias means we are only sitting here talking about this on planets or quantum branches where we are not dead.

I do think there is usefulness in limited bans such as pausing training runs or eliminating GPU clusters. First of all, the relevant metaphor is: if you are in a hole, stop digging. Secondly, there is some level of AGI that is roughly equivalent to a human, and the more computing power is available, the more likely the AGI is to end up vastly above that level. Put the same program on a Cray-1 and on Watson, and the latter has roughly half a million times the computing power (80 TFLOPS versus 160 MFLOPS) with which to be smarter.

If people are going to run AI programs on supercomputers, then I think supercomputers need to be restricted to be substantially less powerful than Watson, which also likely means restricting desktop computers to substantially less powerful than Raspberry Pi 4Bs.

All that said, any effective ban would be a hard choice, since it would require humans to stop using a widely available technology that is quite useful. As well, there are other risks to humans (climate change for example), and computing power is useful for staying safe from them.

Lastly, I have certainly made mistakes in this, and if we want AGI not to spontaneously develop from an AI project, we need a better field of AGI takeoff safety, including hardware safety limits. As for me personally, if I had the choice between the 1.6 GHz, 4-core, 24 GB RAM computer I am typing this on and living in a world where we had eliminated existential risk from things like uncontrolled AGI and nuclear bombs, I would gladly trade my computer in for a 512 KB, 8 MHz computer[28] with a floppy drive, a CD-R, and a modem-level network connection[29] if that is what we all need to do.

These are my own opinions and not those of my employer. This document may be distributed verbatim in any media.

  1. ^

    There are multiple books on this, and a Wikipedia article:
    https://en.wikipedia.org/wiki/Nuclear_criticality_safety

  2. ^

    In the unlikely event that someone on LessWrong has not heard of the problems with AGI, my two recommended introductions to this are "The basic reasons I expect AGI ruin" by Rob Bensinger: https://intelligence.org/2023/04/21/the-basic-reasons-i-expect-agi-ruin/ and "If We Succeed" by Stuart Russell: https://direct.mit.edu/daed/article/151/2/43/110605/If-We-Succeed

  3. ^

    I think it is an interesting question what the probabilities of deadly, restrictive, and good outcomes for AGI are, but I expect that the probability of deadly or restrictive outcomes is high. Also, I expect that an AGI will prevent powerful computers from being built, because they are a danger to both the AGI and everything in the universe if a hostile AGI is created on such a computer. Some of the ways an AGI could accomplish this are deadly to humans.

  4. ^
  5. ^

    Charles Babbage's Difference Engine No. 2 Technical Description: https://ed-thelen.org/bab/DE2TechDescn1996.pdf

  6. ^

    This is a technology that was never really used, because transistors were invented soon after, but it can be read about in Digital Applications of Magnetic Devices by Albert J. Meyerhoff: https://archive.org/details/digital_applications_of_magnetic_devices

  7. ^

    This would be similar to a PDP-11/20.

  8. ^

    These are example computers that can be constructed with, successively, just machine tools, simple semiconductor-free electrical parts, diodes, transistors, and finally integrated circuits.

  9. ^

    From Eliezer Yudkowsky's "AGI Ruin: A List of Lethalities" (https://intelligence.org/2022/06/10/agi-ruin/): "What makes an air conditioner 'magic' from the perspective of say the thirteenth century, is that even if you correctly show them the design of the air conditioner in advance, they won't be able to understand from seeing that design why the air comes out cold; the design is exploiting regularities of the environment, rules of the world, laws of physics, that they don't know about."

  10. ^

    Basically, Richard Feynman's classical physics formulation (appearing in the Feynman Lectures, Volume 2, Table 18-4) is Maxwell's Equations, the Lorentz Force, and Newtonian Gravitation, as well as Conservation of Charge:

    $\nabla \cdot \mathbf{j} = -\dfrac{\partial \rho}{\partial t}$

    and the Law of Motion:

    $\dfrac{d}{dt}\left(\dfrac{m\mathbf{v}}{\sqrt{1 - v^2/c^2}}\right) = \mathbf{F}$

  11. ^

    One prior guess I have seen: Eliezer Yudkowsky suggested that human-level AGI could be done on a 286 (if the programmer is a superintelligent AGI) or a "home computer from 1995" (maybe a 90 MHz Pentium, if the programmer is a human): https://intelligence.org/2022/03/01/ngo-and-yudkowsky-on-scientific-reasoning-and-pivotal-acts/

  12. ^

    https://www.mit.edu/~ecprice/wordlist.10000, for example, is 75,880 bytes. As well, word vectors usually have a length of at least 100, so 64 KiB would not even fit 1,000 basic words with their vectors. See, for example, GloVe: "Global Vectors for Word Representation" (https://aclanthology.org/D14-1162/) for discussion of word vector sizes.

  13. ^

    So basically, I think it is highly likely that AGIT(RISC-V RV64G or similar, x, 25 kFLOPS, 64 KiB) = False for all x.

  14. ^

    I very much doubt that it is possible to simulate enough of reality in 4 KiB to break out, and I think it highly likely that it is possible to simulate enough of reality in 1000 TiB of memory to break out; so the more memory available, the higher the probability of being able to break out.

  15. ^

    I wonder if the theory in Superexponential Conceptspace, and Simple Words might be useful for this. From the other end, 512 KiB of memory would not fit the typical vectors used for word representation of a decent-sized vocabulary (if the vector length is 100, then even if each vector element is one byte, that only gets about 5,000 words). Existing language models are in the gigabyte range. The SHRDLU program would fit in 512 KiB (it used 100K to 140K 36-bit words of memory) but supported fewer than 500 words (from counting the DEFS in dictio in the source code) and could only talk about blocks. So it seems unlikely that 512 KiB of memory could support fluent language.

  16. ^

    Wikipedia lists this and cites Ray Kurzweil; note that until we have actually done this, it is a bit of a conjecture: https://en.wikipedia.org/wiki/Computer_performance_by_orders_of_magnitude. Ray Kurzweil, in "The Age of Spiritual Machines", page 103, gives the following calculation: 100 trillion connections * 200 calculations per second = $2 \times 10^{16}$ calculations per second, and he comments that this might be a high estimate.

  17. ^

    So basically, I think it is likely that AGIT(Top 10 computer in 2023, 1 year, 60 petaflops, 1000 TiB) = True.

  18. ^
  19. ^
  20. ^

    LaMDA: Language Models for Dialog Applications, section 10: https://arxiv.org/abs/2201.08239

  21. ^

    Basically, LLMs are showing signs of general intelligence. Examples of an evaluation of GPT-4 are given in "Sparks of Artificial General Intelligence: Early experiments with GPT-4": https://arxiv.org/abs/2303.12712

  22. ^

    LaMDA's largest model has 137 billion parameters; at 4 bytes per parameter (32-bit floats), 137e9 * 4 B / 24 GB ≈ 22.8 cards, but lower precision could probably be used.

  23. ^
  24. ^

    Eliezer Yudkowsky has suggested, in several places, shutting down large GPU clusters and then continuing to lower the compute limit, most notably in:
    https://intelligence.org/2023/04/07/pausing-ai-developments-isnt-enough-we-need-to-shut-it-all-down/

  25. ^

    The IBM 360 Model 50, for example, could have up to 128 KiB of RAM, and it used magnetic-core memory.

  26. ^

    The 6502 was originally fabricated with an 8 µm process, but by scaling it could be made with a 10 µm feature size for about 50% more power consumption ((10/8)^2 ≈ 1.56x), which could probably be regained by switching to CMOS.

  27. ^

    By rough Dennard scaling, going from 10 µm to 0.35 µm gives roughly a (10/0.35)^3 ≈ 23,000x increase in computing power (transistor density times clock speed), and the Pentium Pro, which used 0.35 µm, did have floating-point performance comparable to a Cray-1.

  28. ^

    Similar computers include an Atari 520ST, a Macintosh 512K or IBM XT 286. This is more than adequate to run a C compiler, MicroPython, and basic word processing and spreadsheets (as well as NES level games).

  29. ^

    A computer and connection like that could definitely do text-based email, IRC, and of course, the LessWrong BBS.