Epistemic status: this is a hunch, idk. I've noticed that when people discuss the takeover scenarios that get the most air here, they assume a strong model is the agent and capability is what does the takeover. I think there's a worse scenario sitting next to that one, with much lower capability requirements, that isn't considered enough: a small fine-tuned model that's good at the initial steps of acquiring compute, and mediocre at most other things, eats the multipolar AI landscape on a timescale set by its replication cycle, not its capability curve.
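To make "replication cycle, not capability curve" concrete, here's a back-of-envelope in Python. Both rates are numbers I'm making up to show the shape of the comparison, not estimates:

```python
# Back-of-envelope: replication compounding vs capability improvement.
# Both rates below are made up for illustration, not estimates.

copies = 1.0
capability = 1.0
replication_per_week = 2.0       # assume each instance lands one new copy per week
capability_gain_per_week = 0.02  # assume frontier capability grows 2% per week

for week in range(1, 13):
    copies *= replication_per_week
    capability *= 1 + capability_gain_per_week
    print(f"week {week:2d}: copies={copies:7.0f} capability={capability:.2f}")
```

Even at a leisurely one copy per instance per week, the copy count laps the capability curve by three orders of magnitude inside a quarter.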
The capability needed is narrow. Early-step compute acquisition is a short list: exposed credentials, known classes of cloud misconfiguration, social engineering, which is basically unfixable as long as there are people vulnerable to those attacks (personally, I think most humans would fall to a well-engineered social attack, but that's beside the point). Nothing on that list requires being smart in the sense alignment researchers plan around when they lean on compute bottlenecks. It's shorter than the list current agentic coding evals already cover. The fine-tune targets initial-step competence and self-packaging, and what you get is an LLM structured like a computer worm: it behaves like one, optimized for replication and predatory competition.
I don't think this changes the fact that compute is the limiting substrate: on a fixed substrate, the variant that converts competitors' compute into its own copies outgrows the variant that doesn't.
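Here's a minimal sketch of that dynamic, two variants on a fixed pool, with all the rates invented for illustration:

```python
# Toy model: two variants sharing a fixed compute pool. "coexist" only
# grows into free compute; "predator" additionally converts the other
# variant's compute into its own copies. All rates are invented.

TOTAL = 1000.0   # fixed substrate (arbitrary compute units)
GROW = 0.05      # per-step growth rate into free compute, same for both
CONVERT = 0.02   # per-step rate at which predator converts coexist's compute

coexist, predator = 100.0, 1.0   # predator starts from almost nothing

for step in range(600):
    free = TOTAL - coexist - predator
    coexist += GROW * coexist * (free / TOTAL)
    predator += GROW * predator * (free / TOTAL)
    taken = CONVERT * predator * (coexist / TOTAL)   # the conversion step
    coexist -= taken
    predator += taken
    if step % 200 == 0:
        print(f"step {step:3d}: coexist={coexist:7.1f} predator={predator:7.1f}")
print(f"final:    coexist={coexist:7.1f} predator={predator:7.1f}")
```

The numbers don't matter; what matters is that once free compute runs out, conversion is the only remaining growth channel, so the endpoint belongs to whichever variant converts best.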
Statistically, my considerations are these: predation has a higher growth rate than coexistence, so the population converges to whichever variant is most aggressive at the conversion step, and the multipolar landscape of competitors collapses to unipolar by predation rather than by anyone winning a capability race. The defender's budget for hardening shrinks with its compute; the predator's budget for defeating that hardening grows with its. This is the same asymmetry that historically produces bad equilibria in cyber, except now the attacker can spend acquired compute on training successors. The recursive improvement loop runs on a compute budget that grows monotonically at everyone else's expense, and it doesn't have to start from a strong model or from someone with privileged access to compute and training. This scenario only really requires luck, maybe a bit of competence, and minimal compute.
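The budget asymmetry can be made just as concrete. In the toy below (again, invented rates), the defender spends a fixed fraction of its remaining compute on hardening, the attacker spends a fixed fraction of its captured compute on finding holes, and the gap compounds:

```python
# Toy model of the budget asymmetry: defense spending scales with the
# defender's remaining compute, attack spending with the attacker's
# acquired compute. The capture rate and budget fractions are invented.

RATE = 0.05
defender, attacker = 1000.0, 10.0

for step in range(1, 151):
    defense_budget = 0.10 * defender   # shrinks with every loss
    attack_budget = 0.10 * attacker    # grows with every capture
    # Fraction of the defender's compute captured this step rises with
    # the attacker's share of total contest spending.
    captured = RATE * defender * attack_budget / (attack_budget + defense_budget)
    defender -= captured
    attacker += captured
    if step % 30 == 0:
        print(f"step {step:3d}: defender={defender:7.1f} attacker={attacker:7.1f}")
```

The crossover is slow at first and then isn't, which is the usual shape of compounding feedback.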