I’m generally sympathetic to Scott’s positions in this discussion, but I think he is probably very wrong about Ilya.
To the best of my knowledge, Safe Superintelligence has never published a single word about how they plan to move alignment forward, which is pretty damning, in my opinion.
I have not heard of anyone known to be thoughtful about AI safety being hired by SSI, and I have not seen any positions advertised to AI safety people. People should correct me if I missed someone good joining SSI, but I think this is also a very bad sign.
My impression is that people who worked with Ilya at OpenAI don’t remember him as being particularly thoughtful about alignment, e.g. much less so than Jan Leike. This is a low-confidence, third-hand impression; people can correct me if I’m wrong.
My impression is that the available evidence suggests that Ilya took part in Altman’s firing mostly over (perhaps justified) office-politics grievances, and not primarily due to safety concerns. I also think the evidence points to his behavior during and after the incident being kind of cowardly. (I haven’t looked deeply into the details of the battle of the board, and it’s possible I’m wrong on this point, in which case I apologize to Ilya.) I’m also doubtful of how self-sacrificing his actions were: my best guess is that his current net worth is higher (at least on paper) than it would be if he had stayed at OpenAI.
I expect that at some point SSI’s investors will grow impatient, and then SSI will start coming out with AI products (perhaps open-source, to be cooler), just like everyone else. I don’t expect them to contribute much to safety, though maybe Ilya will sometimes make some noises about the importance of safety in public speeches, which is nice, I guess.
I’m pretty confident in my first two points, much less so in the next two, but I felt someone should respond to Scott on this point. Perhaps @Buck or someone else who expressed skepticism of Ilya’s project can add more information.