Thank you for pointing this out! While OpenAI has been public about our plans to build an AI scientist, it is of course crucial that we do this safely, and if it is not possible to do it safely, we should not do it at all.

We have written about this before:

OpenAI is deeply committed to safety, which we think of as the practice of enabling AI's positive impacts by mitigating the negative ones. Although the potential upsides are enormous, we treat the risks of superintelligent systems as potentially catastrophic and believe that empirically studying safety and alignment can help inform global decisions, such as whether the whole field should slow development to study these systems more carefully as we get closer to systems capable of recursive self-improvement. Obviously, no one should deploy superintelligent systems without being able to robustly align and control them, and this requires more technical work.
but we should have mentioned this in the hello world post too. We have now updated it with a link to this paragraph.