Strong downvote because the question feels a bit targeted / leading. Maybe OpenAI is decreasing AI xrisk. Maybe other organizations are also engaged in similar behaviors that increase AI xrisk. I think a better approach would be to break things down into:
1) What factors affect AI xrisk? (Or, since that’s pretty broad, have specific questions like “Does X affect AI xrisk?”) (E.g. “How does pushing the state of the art of capability research affect AI xrisk?”)
2) Have specific questions about OpenAI actions / traits that can be relatively easily grounded. (E.g. “How would you rate the quality of the OpenAI safety team?”)
It’s easy enough for people to put #1 and #2 together, and it has the added benefit of letting people answer #1 without targeting any specific company. Plus, answers to #1 apply to other organizations.
Thanks for explaining your downvote! I agree that the question is targeted. I tried to also give arguments against this idea that OpenAI increases xrisk, but it probably still reads as biased.
That being said, I disagree about not targeting OpenAI. Everything that I’ve seen discussed by friends is centered entirely on OpenAI. I think it would be great to have an answer showing that OpenAI is only the most visible group acting this way, and that others follow the same template. It’s still true that the question is raised far more about OpenAI than about any other research group.