People disagree with each other about things all the time, even after lots of evidence has accumulated. This isn’t exactly the first time. People aren’t perfect rationalists.
For my part, well, making the case that the Anthropic Consensus is wrong is not my top priority; I have lots of other things going on. But I’ve written a bunch about my views on alignment, e.g. in AI 2027 and other work. I’d love it if Anthropic made the case for the Anthropic Consensus in public; I could then write a blog post picking it apart. I’m happy that they are at least moving in this direction by writing more content in system cards and risk reports.