Should we take this seriously? I'm guessing no: if this were true, someone at OpenAI or DeepMind would have encountered it too, the safety people would have investigated and discovered it, and everyone in the safety community would be freaking out right now.
(This reply isn’t specifically about Karpathy’s hypothesis...)
I'm skeptical of the general reasoning here. I don't see how we can be confident that OpenAI/DeepMind would encounter a given problem first. It's also not obvious that the safety people at OpenAI/DeepMind would even hear about a concerning observation if the capabilities-focused team can explain it away to themselves with a non-concerning hypothesis.