Thanks for speaking up! This is his post. As you can see, he presents a slide by @Ryan Kidd as evidence. I could just say that someone from Anthropic calls this a myth, but I'd like to know more: does the claim of a funding overhang for AI safety depend on a particular contentious interpretation of certain data?