Just on the point of MAIM, I would point out that one of the authors of that paper (Alexandr Wang) has seemingly jumped ship from the side of “stop superintelligence from being built” [1] to the side of “build superintelligence ASAP”, since he now heads up the somewhat unsubtly named “Meta Superintelligence Labs” as Chief AI Officer.
[1]: I mean, as the head of Scale AI (a company that produces AI training data), I’m not sure he was ever on the side of “stop superintelligence from being built”, but he did coauthor the paper apparently.
Also, Dan Hendrycks works at xAI and makes capability benchmarks.
He definitely works mostly on things he considers safety. I don’t think he has done much capability benchmark work recently (maybe I am wrong, but I figured I would register that the above didn’t match my current beliefs).
Earlier this year
Oh :/. Thank you for bringing this to my attention!