If you’re wondering why OAers are suddenly weirdly, almost euphorically, optimistic on Twitter
For clarity, which OAers is this talking about, precisely? There's a cluster of guys – e.g. this, this, this – claiming to be OpenAI insiders. That cluster went absolutely bananas over the last few days, claiming that ASI has been achieved internally / will be achieved in a few weeks, and alluding to an unexpected breakthrough that has OpenAI researchers themselves scared. But none of them, as far as I can tell, have offered any proof that they're OpenAI insiders.
On the contrary: the Satoshi guy straight-up suggests he's allowed to be an insider shitposting classified stuff on Twitter because he has "dirt on several top employees", which, no. From that, I conclude that the whole cluster belongs to the same species as the cryptocurrency hivemind hyping up shitcoins.
Meanwhile, actual confirmed OpenAI employees are either staying silent or carefully deflating the hype. roon is being roon, but no more so than usual, as far as I can tell.
So… who are these OAers being euphorically optimistic on Twitter, and are they actually OAers? Does anyone know? (I don't think a scenario where low-level OpenAI people are allowed to truthfully leak this stuff on Twitter, but only if it's plausibly deniable, makes much sense.[1] In particular: what otherwise-unexplainable observation are we trying to explain with this highly complicated hypothesis? How is it privileged over "attention-seeking roon copycats"?)
General question, not just aimed at Gwern.
Edit: There’s also the Axios article. But Axios is partnered with OpenAI, and if you go Bounded Distrust on it, it’s clear how misleading it is.
[1] Suppose that OpenAI is following this strategy in order to have their cake and eat it too: engage in a plausibly deniable messaging pattern, letting their enemies dismiss it as hype (and so not worry about OpenAI and AI capability progress) while letting their allies believe it (and so keep supporting/investing in them). But then either (1) the stuff these people are now leaking won't come true, disappointing the allies, or (2) it will come true, and the enemies will know to take such leaks seriously the next time.
Either way, it's a one-time-use strategy. At that point, either (1) let actual OpenAI employees leak this stuff, if you're fine with this type of leak, or (2) instruct the hype men to make stuff up entirely, because if you expect your followers not to double-check which predictions came true, you don't need to care about the truth value at all.