What we lack here is not so much a “textbook of all of science that everyone needs to read and understand deeply before even being allowed to participate in the debate”. Rather, we lack good, commonly held models of how to reason about what counts as theory, and good terms to (try to) coordinate around and use in debates and decisions.
Yudkowsky’s sequences (Rationality: AI to Zombies) provide both of these things. People did not read the sequences and internalize their load-bearing conclusions deeply enough to prevent the current poor state of AI theory discourse, though they could have. If you want your posts to have a net odds-of-humanity’s-survival-improving impact on the public discourse on top of Yudkowsky’s, I would advise that you condense your points and make the applications to concrete corporate actors, social contexts, and Python tools as clear as possible.
Typo: “Merman” → “Mermin”.