I can’t comment on why you weren’t invited [to the CFAR postmortem], because I was not involved with the decision-making for who would be invited; I just showed up to the event. Naively, I would’ve guessed it was because you didn’t work at CFAR (unless you did and I missed it?); I think only one attendee wasn’t in that category, for a broad definition of ‘work at’.
I have to rate all the time spent that didn’t result in improvements visible from the outside as nothing but costs paid to sustain internal narcissistic supply.
This seems fair to me.
The uniformly positive things I’ve heard about “Don’t Create the Torment Nexus II: If Anyone Builds It, Everyone Dies” imply not much in the way of new perspective, or even consensus that one is needed.
I think the main difference between MIRI pre-2022 and post-2022 is that pre-2022 had much more willingness to play along with AI companies and EAs, and post-2022 is much more willing to be openly critical.
There are other differences, and also I think we might be focusing on totally different parts of MIRI. Would you care to say more about where you think there needs to be new perspective?
If the transition from less to more disagreeableness doesn’t come along with an investigation of why agreeableness seemed like a plausible strategy and what was learned, then we’re still stuck trying to treat an adversary as an environment.
I think I agree with your statement; I assume that this happened, though? Or, at least, in a mirror of the ‘improvements visible from the outside’ comment earlier, the question is whether MIRI is now operating in a way that leads to successfully opposing their adversaries, rather than whether they’ve exposed their reasoning about this to the public.
Naively, I would’ve guessed it was because you didn’t work at CFAR (unless you did and I missed it?)
The attendee who told me about it never worked at CFAR, and neither did a couple other people I knew who went. Also I did guest-instruct at a CFAR workshop once.