OpenAI has recently released this charter outlining their strategic approach.
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”
We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing safety, policy, and standards research.
My reaction was that this sounds like very good and pretty important news. It reads as genuine, not as a mere attempt to appease critics. But I haven’t seen anyone on LW mention it, which leaves me wondering whether I’m being naive.
So I’m curious about other opinions here. I’m also reminded of this post, which seemed reasonable to me at the time.