Chris Olah: One of the ideas I find most useful from @AnthropicAI‘s Core Views on AI Safety post (https://anthropic.com/index/core-views-on-ai-safety…) is thinking in terms of a distribution over safety difficulty. Here’s a cartoon picture I like for thinking about it:
I like this picture a lot. I personally place the peak of my distribution between Apollo and P/NP. My lower tail does not go as low as Steam Engine; my upper tail does include Impossible.