Fair. Depending on your priors, there's definitely an important sense in which something like Reardon's case is simpler:
https://frommatter.substack.com/p/the-bone-simple-case-for-ai-x-risk
I’d be interested in someone else trying to rewrite his article while removing in-group jargon and tacit assumptions!