I already thought the big-lab alignment folks were unserious, unhelpful, and unlikely to speak up in recognition of acute danger. This has, alas, strengthened my convictions. I pray unconfidently that this article is unrepresentative of the quality and tenor of the strategic thinking inside the labs. Also,
(as it is mine to some extent)
is darkly funny. Yes, to some extent your job is to reduce the chance of everything being destroyed… and to some extent, it’s increasing the share value of OpenAI.
I thank you for the data embodied in this post.