Hmm. There are lots of valuable advocacy targets besides literal regulation; advocates might try to pass laws, win court cases, sustain a boycott, amend a corporate charter, amend a voluntary industry standard at NIST, and so on. I’ll discuss several examples in post #5.
I suppose advocacy might also have side benefits, like teaching us more about politics, helping us bond as a community, or giving us a sense of efficacy. But it’s not obvious to me that any of those are of comparable importance to reducing the likelihood that AI developers release misaligned superintelligence.
Does that answer your question? If not, please let me know more about what you mean and I’ll try to elaborate.
Yeah, that was a good response, thank you for your thoughts.