I think an important point to make is that whether a node is red or yellow depends on considerations other than just capabilities vs. alignment: for example, the current state of hardware progress, how integrated fully autonomous robots are in our factories, society, and warfare, and how common AI companions are.
Consider the case that the current (2023) deployed hardware/compute does not constitute an X-risk. That is, even with the most advanced and efficient algorithms, the resulting AGI could not take over (because of, say, the von Neumann bottleneck), but would have to wait until more advanced chip factories were built and started production.
To be clear, if the current hardware/compute does not constitute an X-risk, then all the nodes are arguably currently yellow. I appreciate it could be more complicated than this: if you regard it as highly likely that an AI in the situation described above would convince people to build more sophisticated chip factories, or that this will happen by default anyway, then you could say the node is actually red, or some probability mix of yellow and red.
I disagree. I think you underestimate how powerful a disembodied but highly persuasive, superhumanly strategic, and superhumanly fast AGI could be. I think we (2023) are already well beyond the X-risk danger threshold for deployed hardware/compute, and that the only thing holding us back from doom is that we haven't stumbled onto a sufficiently advanced algorithm.