After reading some of this reddit thread, I think I have a better picture of how people are reacting to these events. I will probably edit this post or write a follow-up.
My high-level takeaways are:
- People are afraid to engage in speech that will be interpreted as political, so they are saying nothing.
- Nobody is actually making statements about my model of alignment deployment; possibly nobody is even thinking about it.
In the edit, or possibly in a separate follow-up post, I will try to present the model at a further disconnect from the specific events and actors involved, which I am only interested in as inputs to the implementation model anyway.
> people are afraid to engage in speech that will be interpreted as political [...] nobody is actually making statements about my model of alignment deployment [...] try to present the model at a further disconnect from the specific events and actors involved
This seems pretty unfortunate insofar as some genuinely relevant real-world details might not survive the obfuscation of premature abstraction.
An example of such an empirical consideration (relevant to the “have some members that keep up with AI Safety research” point in your hopeful plan): how much overlap and cultural compatibility is there between AI-ethics-researchers-as-exemplified-by-Timnit-Gebru and AI-safety-researchers-as-exemplified-by-Paul-Christiano? (By all rights, there should be overlap and compatibility, because the skills you need to prevent your credit-score AI from being racist (with respect to whatever the correct technical reduction of racism turns out to be) should be a strict subset of the skills you need to prevent your AGI from destroying all value in the universe (with respect to whatever the correct technical reduction of value turns out to be).)
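For concreteness, here is a minimal sketch of the kind of check an AI-ethics practitioner might run against a credit-scoring model: measuring the demographic parity gap, i.e. the difference in approval rates between groups. Everything in it — the metric choice, the binary-group framing, the synthetic data — is an illustrative assumption, not a claim about what the correct technical reduction of racism actually is.

```python
import numpy as np

def demographic_parity_gap(approved: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in approval rates between two groups.

    `approved` is a boolean array of model decisions; `group` is a boolean
    array marking membership in a protected group. Both the metric and the
    two-group framing are simplifying assumptions for illustration.
    """
    rate_in_group = approved[group].mean()
    rate_out_group = approved[~group].mean()
    return abs(rate_in_group - rate_out_group)

# Hypothetical usage: decisions from some credit model on held-out data.
rng = np.random.default_rng(0)
group = rng.random(10_000) < 0.3                               # 30% protected group
approved = rng.random(10_000) < np.where(group, 0.55, 0.65)    # deliberately skewed approvals

gap = demographic_parity_gap(approved, group)
print(f"demographic parity gap: {gap:.3f}")  # flag if above some chosen threshold
```

Demographic parity is only one of several fairness criteria (equalized odds and calibration are others), and such criteria are known to be mutually incompatible in general — which is part of why the “correct technical reduction” caveat above is doing real work.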
Have you tried asking people to comment privately?