Consider writing up some of these impressions publicly. I would have talked to a couple people at the org before joining, but as someone who is almost completely disconnected from the “rationalist scene” physically, all I have to go on are what people say about the org on the internet. I don’t really have access to the second- or third-hand accounts that you probably have.
The signals I can remember updating on in the past were something like:
A few offhand tweets by EY (before I stopped reading twitter) saying he was more impressed than he expected to be with their alignment research.
Some comments on LW, which I can't locate now, alleging that Anthropic was started via an exodus of OpenAI's most safety-conscious researchers.
Their website and general policy of not publishing capabilities research by default.
The identities and EA affiliations of the funders. Jaan Tallinn seems like a nice person. SBF was not a good person, but his involvement lets me infer certain things about their pitch/strategy that I couldn't otherwise.
This post by evhub and just the fact that they have former MIRI researchers joining the team at all. Didn’t even remember this part until he commented.
In retrospect, these are maybe some pretty silly things to base an opinion of an AGI organization on. But I guess you could say their marketing campaign was successful: my cautious impression was that they were pretty sincere and effective.