On the other side, the more you are capable of a leadership position or of running your own operation, the more you don’t want to be part of this kind of large structure, the more you worry about being able to stand firm under pressure and advocate for what you believe, or the more what you want to do can be done on the outside, the more I’d look at other options.
Thanks for writing this!

One question is: what if one has strong technical disagreements with the OpenAI approach as currently formulated? Does this automatically imply that one should stay out?
For example, the announcement starts with the key phrase:
We need scientific and technical breakthroughs to steer and control AI systems much smarter than us.
However, one could reason as follows (similarly to what is said near the ending of this post).
A group of unaided, well-meaning, very smart humans is still not smart enough to steer super-capabilities in a fashion that avoids driving off the cliff. So humans would need AI help not only in their efforts to “align, steer, and control AI”, but also in figuring out the directions in which they want to steer.
So, one might decide that a more symmetric formulation would make more sense, that it’s time to drop the idea of “steer and control AI systems much smarter than us” and replace it with something more realistic and probably more desirable, for example, something along the following lines: “to organize a fruitful collaborative ecosystem between humans and AIs, such that it collectively decides where to steer in a fruitful way while guarding against particularly unsafe actions and developments”.
In this formulation, having as much AI assistance as possible from the very beginning sounds much more natural.
And the risk framing becomes not the adversarial “supersmart, potentially very unfriendly AI against humans” (which sounds awfully difficult and almost hopeless), but the “joint ecosystem of humans and AIs against the relevant risk factors”, which seems like a more realistic fight to win.
So, one wonders: if one has this much disagreement with the key starting point of the OpenAI approach, is there any room for collaboration, or does it make more sense to do this elsewhere?
If my alternative action had zero or low value, my inclination would be to interview with OpenAI, be very open and direct about my disagreements and about the fact that I was going to remain loud about them, see if they hired me anyway, and accept if I updated positively during the process about my impact from joining.
I’d be less excited if I had other very good options to consider.
Thanks!