Is this an admission that CFAR cannot effectively help people with problems other than AI safety?
Or an admission that this was their endgame all along, now that they have built a base of people who like them? I've been expecting that for quite some time. It fits the modus operandi.