I think from an x-risk perspective the relevant threshold is the point at which AI no longer needs human researchers to improve itself. Currently, no publicly known model can improve itself fully autonomously. The question we need to ask is: when will this get out of our control? Today, it still needs us.