Website: https://aelerinya.me
Lucie Philippon
Edited to say it is not your position. I’m sorry for having published this comment without checking with you.
EDIT: Originally I said that was my best understanding of Mikhail’s point. Mikhail has told me it was not his point. I’m keeping this comment as that’s a point that I find interesting personally.
Before Mikhail released this post, we talked for multiple hours about the goal of the article and how to communicate it better. I don’t like the current structure of the post, but I think Mikhail has good arguments and has gathered important data.
Here’s the point I would have made instead:
Anthropic presents itself as the champion of AI safety among the AI companies. People join Anthropic because they trust that the Anthropic leadership will make the best decisions to make the future go well.
There have been a number of incidents, detailed in this post, where it seems clear that Anthropic went against a commitment they were expected to uphold (by pushing the frontier), where their communication was misleading (like misrepresenting the RAISE bill), or where they took actions that seem incongruous with their stated mission (like accepting investment from Gulf states).
All of those incidents most likely have explanations that were communicated internally to the Anthropic employees. Those explanations make sense, and employees believe that the leadership made the right choice.
However, from the outside, a lot of those actions look like Anthropic gradually moving away from being the company that can be trusted to do what’s best for humanity. It looks like Anthropic doing whatever it can to win the race even if it increases risks, like all the other AI companies. From the outside, it looks like Anthropic is less special than it seemed at first.
There are two worlds compatible with the observations:
One where Anthropic is still pursuing the mission, and those incidents were just the best way to pursue the mission. The Anthropic leadership is trustworthy, and their internal explanations are valid and represent their actual view.
One where the Anthropic leadership is no longer reliably pursuing the mission, where those incidents are in fact evidence of that, and where the leadership is using its powers of persuasion, and its access to information employees don’t have, to convince them that it was all for the mission, whatever the real reasons.
In the second world, working at Anthropic would not reliably improve the world. Anthropic employees would have to evaluate whether to continue working there in the same way as they would if they worked at OpenAI or any other AI company.
All current and potential Anthropic employees should notice that from the outside, it sure does look like Anthropic is not following its mission as much as it used to. There are two hypotheses that explain it. They should make sure to keep tracking both of them. They should have a plan of what they’ll do if they’re in the least convenient world, so they can face uncomfortable evidence. And, if they do conclude that the Anthropic leadership is not following Anthropic’s mission anymore, they should take action.
Nominated. One of the posts that changed my life the most in 2024. I’ve eaten oatmeal at least 50 times since then, and have enjoyed the convenience and nutrition.
I’ll go buy some more tomorrow
Nominated. Ever since reading it, I’ve used the calculator linked in this post to decide whether to take out insurance.
Nominated. The hostile telepath problem immediately entered my library of standard hypotheses to test when debugging my behavior and helping others do so, and it sparked many lively conversations in my rationalist circles.
I’m glad I reread it today.
Draft post made during Inkhaven. Interested in feedback.
Signals of Competence is the model I use to advise friends on how to build career capital.
When deciding who to hire, an organization will assess the competence of the candidates by looking at various signals that they sent in their CV, cover letter, interview or work test.
Each signal falls somewhere along two main dimensions:
Reach/Breadth: how wide a population understands the signal. A Harvard degree is broad; having built a specific piece of niche software is narrow.
Detail/Depth: how detailed a picture of your competence it paints. A degree only signals that you’re in the degree-holder distribution, which is very shallow. A portfolio shows specific things you made by yourself, which is deeper. Having worked with someone for years gives them a detailed model of how you work and what you’re good at, which is as deep as it gets.
Signals of Competence are of two types:
Individual: Historically, all signals of competence were of the form “Here is an example of my work” or “I worked with you or your friend before, so you trust me.” Those signals are usually detailed, but since they’re specific to you, they don’t work beyond your social circles.
Institutional: Institutions were built to create signals of competence that are portable (e.g. degrees, certifications). The trust is no longer in the specific individual, but in the institution that attests to their competence.
Small and big organizations generally care about different signals.
In big organizations, recruiters usually don’t have the technical knowledge to assess detailed signals of competence, so they rely on broad ones, like degrees. To get hired there, you have to max out institutional signals, the ones legible to an MBA without domain-specific knowledge.
In small organizations, the cost of a bad hire is much higher, and the recruiter will be much more technical. They will want the most detailed signal they can get, to reduce the risk that you’ll tank the company. You’ll want specific personal connections or direct experience working on their topic.
Your CV is the collection of those signals. There are three ways to improve it:
More detailed signals: build up a portfolio in the specific industry you want to get into, work with people who can attest to your competence and recommend you, publish in the recognized venues of the field → increases your chances of breaking into a specific field
More widely legible signals: get into a top university, work in big tech, anything with wide brand recognition → increases your ability to pivot to any industry
Push the Pareto frontier of breadth/depth: being the maintainer of a famous open-source project is a widely understood signal in tech that gives you lots of credibility, even though it’s worth nothing outside tech
Recruiters look for three kinds of signals:
Technical Competence: are you good at the specific thing the org is doing?
Executive Function: are you a reliable person that will get shit done on time?
Culture Fit: are you someone that they’ll enjoy working with or will you make them miserable?
Make sure you signal your competence on all three.
Its population advantage had already vastly diminished by the end of the 19th century, as France was the first country to go through the demographic transition, starting in the 18th century. AFAIK its population relative to other Western countries has been stable since WWII.
(If this seems mean, I think I have equally mean critiques about how every other country is screwing up. I’m just calling out France here because the claims in the post are about France.)
Yeah I realized this post did make me sound like a patriot lol. I’m not convinced of France’s relevance, nor its irrelevance. I’m writing those posts for myself to figure out whether France matters, and to help other people working in AI policy to have a good model of France’s motivations in the AI race.
Yeah, agree with most of this. I added a note saying that it’s the narrative that I think is shaping France’s actions, rather than my view of the situation.
I agree that Mistral cannot come to parity on technical prowess. I think that from the sovereignty angle, it still can be a success if they make models that are actually useful for industry and are trusted by French national security agencies, which seems to be more like Mistral’s goal lately.
Strong agree on not expecting France to be a significant player in AI development. However, I expect that France seeing itself as in the race is a big part of the current AI investment push. Also, France might not be in the race, but it could still actually matter whether they have national compute resources and expertise. France only developed nuclear weapons fourth, but it still matters for its sovereignty to have them.
Also agree on the lameness of the tech scene in France. I was working at a world-leading crypto startup founded in France, and the founder still ended up moving from Paris to London to get closer to a proper financial district.
Maybe people avoid looking at that because realizing they aren’t in love with their partner would be very inconvenient.
This definitely happened for me. I wasted so much time in relationships that were not valuable to me. Thanks for writing it so crisply.
France is ready to stand alone
ACX/LW October Paris Meetup
LOCATION CHANGE
Due to the Thunderstorm alert tomorrow, I’m moving the location to Ground Control, near Gare de Lyon.
Looking forward to meeting you all tomorrow!
Updated invite: https://partiful.com/e/ZumH1DtmgOxLqSFy34jL
Paris – ACX Meetups Everywhere Fall 2025
Yeah, I’ve ramped up slowly and did not get injured :) Although, I haven’t done any long runs in them yet.
Do you think the proper running technique would be different for barefoot shoes? I’ve heard that, with barefoot running, it’s better to go forefoot first to use the arch muscles to absorb the shock, or something like that.
What’s your opinion on starting to run with barefoot shoes? I’ve recently started using barefoot shoes due to overpronation and ankle instability (they worked very well), so I’ve been a bit hesitant to use narrow cushioned shoes again for running. Any resources you recommend on this topic?
Thank you, River, for being the actual lawyer here.
I guess much of my confusion stemmed from thinking a 501c3 was a specific corporate structure from the way the podcast described it, whereas you seem to say that it’s a tax status that you put on top of any existing corporate structure. In France, the tax advantages are just part of the Association structure.
Governance structure: I got the wrong impression from the podcast, where Andrew said that making a board and having board meetings was part of the steps to start a church. Probably, that was specific to the Californian corporate structure he was using, which I thought was a characteristic of 501c3.
Liability: Same for this. It makes sense that 501c3 being only a tax status has no bearing on liability.
Constitutional protection: Here I was thinking about the part of the podcast where Andrew said that one advantage of incorporating as a church was that you had to do far less reporting of your activities to the government than other 501c3s.
Lobbying: Yeah, makes sense that one would just separate the two organizations. I agree that the main advantage in French law is that you keep the tax-deductible donations when lobbying, as long as the lobbying supports the general interest mission.
Board members: In France, it’s common for the president of the board and organization leader role to be the same person (the Président-directeur général). In most small Associations I’ve been part of, the president of the board was in fact the leader of the organization, so the restriction on getting paid was in fact restricting. If the default approach in the US is to separate them, then it makes sense that it’s not a “workaround”.
Interestingly, that means that Sam Altman being OpenAI’s CEO and on the OpenAI board is actually a surprising point of power concentration if American organizations are not like that by default.
Note that the documentation says they’ll aim to recruit 1-3 nationals from each EU country (plus Norway, Iceland and Liechtenstein). As far as I understood, it does not require them to be living in their home country at the time of applying. Therefore, people from small European countries have especially good odds.
Note also that the gender balance goal would also increase the chance for any women applying.
Thanks for sharing! I’ve been sharing it with people working in AI Safety across Europe.
Do you know if anyone is doing a coordinated project to reach out to promising candidates from all EU countries and encourage them to apply? I’m worried that great candidates may not hear about it in time.
In https://secularsolstice.vercel.app/feedback, “2022 (Or, “Where Ray’s Coming From”)” has the lyrics of “Five Thousand Years”, which seems incorrect.