I remember Kokotajlo’s complaint that the optimistic 2027 timeline just ended too soon. It means we’d also need to consider that, say, by 2035, either someone will create a superintelligence, aligned or not, or every human and AI will understand why they shouldn’t do it or can’t do it. What do you think will happen? A demonstration that ASI is uncreatable or unalignable? A decision to shut down AI research?
Yeah, I share a similar intuition. It seems to me that the two steady states are either strong restrictions imposed by some dominant force, or else proliferation of superintelligence into many distinct and diverse entities, all vying for their own interests.