Tangent:
If the DeepMind alignment team wanted to host something here I would probably give them a substantial discount! If the Anthropic alignment team wanted to host something here I would probably charge them a small but not enormous tax.
Are you saying that you believe Google DeepMind’s alignment team is doing better work than Anthropic’s alignment team? If so, that’s a noteworthy update for me. (My vague impression was that Anthropic is the least bad AI lab, but perhaps I’ve not paid enough attention.)
Strongly agree, especially with your point 3. Fwiw, I used to work at Metaculus. Copy-pasting something I wrote after I left (which was in response to a Slack post about this study):