So I assumed a specific relationship between “one unit of human-brain power” (HBP) and “superintelligence capable of killing humanity”. I use human-brain power as the unit, but the scaling doesn’t have to be linear: imagine a graph with two labeled data points, one at (human, X: 1) and another at (SI, X: 10B). You can draw many different curves connecting those two points, and the Y axis is essentially arbitrary.
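As a minimal sketch of that picture (the Y values below are hypothetical placeholders, since the capability axis is arbitrary; the variable names and curve choices are mine, just for illustration), here are three of the many curves that pass through both labeled points:

```python
import numpy as np
import matplotlib.pyplot as plt

# Two labeled data points: X is human-brain power (HBP),
# Y is "capability" on some arbitrary scale.
x_human, x_si = 1.0, 10e9    # 1 HBP vs. 10B HBP
y_human, y_si = 1.0, 100.0   # placeholder Y values (axis is arbitrary)

x = np.logspace(0, 10, 500)  # HBP axis from 1 to 10B, log-spaced

# Three of the many curves that hit both points exactly:
linear = y_human + (y_si - y_human) * (x - x_human) / (x_si - x_human)
log_like = y_human + (y_si - y_human) * np.log(x / x_human) / np.log(x_si / x_human)
power = y_human * (y_si / y_human) ** (np.log(x / x_human) / np.log(x_si / x_human))

for curve, label in [(linear, "linear"), (log_like, "logarithmic"), (power, "power law")]:
    plt.plot(x, curve, label=label)
plt.scatter([x_human, x_si], [y_human, y_si], color="black", zorder=3)
plt.xscale("log")
plt.xlabel("human-brain power (HBP)")
plt.ylabel("capability (arbitrary units)")
plt.legend()
plt.show()
```

All three curves agree at the two anchor points and disagree everywhere in between, which is the point: the two data points alone don’t pin down the shape of the relationship.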
Now maybe 10B HBP to kill humanity seems too high, but I’m treating humanity as a civilization, one that includes a ton of other compute, AI, and AGI, and I don’t put much credence in strong nanotech.