every year of hardware increase without foom also raises the bar for foom by increasing the (AI assisted) capabilities of our human/machine civilization
I think it is worth highlighting that this is an assumption, not a necessary fact about how civilization works. To put it into nonsense-math:
Assumption: Suppose that the current (technology-assisted) capability of the average human is X, and that forming a singleton would require capability X + C. Then if the technology-assisted capability of the average human increases to Y, forming a singleton would require capability ≥ Y + C.
I certainly don’t mean to say that this is definitely not true. However, I think it is far from obvious. In practice, I expect some technologies to make takeover harder, and others to make it easier, with the overall trend being unclear. (I would bet on overall making it harder, but with very high uncertainty.) Some reasons for the non-obviousness:
The adoption of capabilities will be uneven. (EG, Google might increase their cybersecurity, but the same might not hold for Backwater State University. I might keep up to date with AI being able to do impersonation scams, but my grandma won't.)
Some takeover strategies might only require taking control of some percentage of the population (infrastructure, resources, …) rather than the most capable/well-defended population (infrastructure, resources, …). To give a non-takeover-strategy example: manipulating a presidential election works like this. [Don't mistake "I can't identify any strategy like this" with "there is no such strategy".]
I expect that as we get more technology, the world will naturally grow more robust against exploits that people actually try, or expect others to try. However, most people are not psychopaths (or, even worse, fanatical psychopaths), so exploits that only a psychopath would attempt may never get tried in practice, and the corresponding vulnerabilities might remain unfixed.
Some technologies that make us more capable also make us more vulnerable. EG, everybody having a personal AutoGPT with access to all their passwords, automatically running on the latest LLM, definitely increases our capabilities. But it also creates a single point of failure.
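The uneven-adoption point can be made concrete with a toy model (my own construction, not from the original argument; all names, numbers, and the `required_capability` helper are arbitrary illustrations): if "takeover" only requires compromising some fraction of a population with unevenly distributed defenses, then the capability an attacker needs tracks the weakest members, not the average, so the average defense can rise sharply while the bar for takeover stays flat.

```python
# Toy model (hypothetical, for illustration only): a population with
# unevenly adopted defenses. An attacker with capability c can
# compromise anyone whose defense level is below c, and "takeover"
# requires compromising some fixed fraction of the population.
import random

def required_capability(defenses, fraction):
    """Minimum attacker capability needed to compromise `fraction` of the population."""
    k = int(len(defenses) * fraction)
    return sorted(defenses)[k - 1]  # defense level of the k-th weakest member

random.seed(0)
n = 10_000
# Before: defense levels uniform on (0, 100).
before = [random.uniform(0, 100) for _ in range(n)]
# After: the already-well-defended half (the Googles) upgrades 10x,
# while the weak half (the Backwater State Universities) stays put.
after = [d * 10 if d > 50 else d for d in before]

avg_before, avg_after = sum(before) / n, sum(after) / n
print(f"average defense: {avg_before:.0f} -> {avg_after:.0f}")   # rises several-fold
print(f"bar to compromise 30%: {required_capability(before, 0.3):.0f}"
      f" -> {required_capability(after, 0.3):.0f}")              # unchanged
```

In this sketch the average defense rises several-fold, but the capability needed to compromise the weakest 30% does not move at all, because the upgrade never reached them.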