AI “alignment” seems to have transmuted into “forcing AI to express the correct politics” so subtly and quickly that no one noticed. Does anyone even care about the X-risk thing any more, or is it all now about making sure ChatGPT doesn’t oppose abortion or whatever?
I think most people working on alignment care more about long-term risks than about ensuring existing AIs express particular political opinions.
I don’t think your comment is really accurate even as a description of alignment work on ChatGPT. Honesty and helpfulness are still each bigger deals than political correctness.
Essentially all of us on this particular website care about the X-risk side of things, and by far the majority of alignment content on this site is about that.