Website: https://aelerinya.me
Lucie Philippon
Its population advantage had already been vastly reduced by the end of the 19th century, as France was the first country to go through a demographic transition, starting in the 18th century. AFAIK its population relative to other Western countries has been stable since WWII.
(If this seems mean, I think I have equally mean critiques about how every other country is screwing up. I’m just calling out France here because the claims in the post are about France.)
Yeah I realized this post did make me sound like a patriot lol. I’m not convinced of France’s relevance, nor its irrelevance. I’m writing those posts for myself to figure out whether France matters, and to help other people working in AI policy to have a good model of France’s motivations in the AI race.
Yeah, agree with most of this. I added a note saying that this is the narrative that I think is shaping France's actions, rather than my own view of the situation.
I agree that Mistral cannot reach parity in technical prowess. I think that from the sovereignty angle, it can still be a success if it makes models that are actually useful to industry and trusted by French national security agencies, which seems to be closer to Mistral's goal lately.
Strong agree on not expecting France to be a significant player in AI development. However, I expect that France seeing itself as in the race is a big part of the current AI investment push. Also, France might not be in the race, but it could still actually matter whether they have national compute resources and expertise. France only developed nuclear weapons fourth, but it still matters for its sovereignty to have them.
Also agree on the lameness of the tech scene in France. I was working at a world-leading crypto startup founded in France, and the founder still ended up moving from Paris to London to get closer to a proper financial district.
Maybe people avoid looking at that because realizing they aren’t in love with their partner would be very inconvenient.
This definitely happened for me. I wasted so much time in relationships that were not valuable to me. Thanks for writing it so crisply.
LOCATION CHANGE
Due to the Thunderstorm alert tomorrow, I’m moving the location to Ground Control, near Gare de Lyon.
Looking forward to meeting you all tomorrow!
Updated invite: https://partiful.com/e/ZumH1DtmgOxLqSFy34jL
Yeah, I’ve ramped up slowly and did not get injured :) Although I haven’t done any long runs in them yet.
Do you think the proper running technique would be different for barefoot shoes? I’ve heard that, with barefoot running, it’s better to go forefoot first to use the arch muscles to absorb the shock, or something like that.
What’s your opinion on starting to run with barefoot shoes? I’ve recently started using barefoot shoes due to overpronation and ankle instability (worked very well), so I’ve been a bit hesitant to use narrow cushioned shoes again for running. Any resources you recommend on this topic?
Thank you, River, for being the actual lawyer here.
I guess much of my confusion stemmed from thinking a 501c3 was a specific corporate structure from the way the podcast described it, whereas you seem to say that it’s a tax status that you put on top of any existing corporate structure. In France, the tax advantages are just part of the Association structure.
Governance structure: I got the wrong impression from the podcast, where Andrew said that making a board and having board meetings was part of the steps to start a church. Probably, that was specific to the Californian corporate structure he was using, which I thought was a characteristic of 501c3.
Liability: Same for this. It makes sense that 501c3, being only a tax status, has no bearing on liability.
Constitutional protection: Here I was thinking about the part of the podcast where Andrew said that one advantage of incorporating as a church was that you had to do far less reporting of your activities to the government than other 501c3s.
Lobbying: Yeah, makes sense that one would just separate the two organizations. I agree that the main advantage in French law is that you keep the tax-deductible donations when lobbying, as long as the lobbying supports the general interest mission.
Board members: In France, it’s common for the president of the board and the organization leader to be the same person (the Président-directeur général). In most small Associations I’ve been part of, the president of the board was in fact the leader of the organization, so the restriction on getting paid was actually binding. If the default approach in the US is to separate the two roles, then it makes sense that it’s not a “workaround”.
Interestingly, that means that Sam Altman being OpenAI’s CEO and on the OpenAI board is actually a surprising point of power concentration if American organizations are not like that by default.
Note that the documentation says they’ll aim to recruit 1-3 nationals from each EU country (plus Norway, Iceland and Liechtenstein). As far as I understand, it does not require them to be living in their home country at the time of applying. Therefore, people from small European countries have especially good odds.
Note also that the gender balance goal increases the chances of any woman applying.
Thanks for sharing! I’ve been sharing it with people working in AI Safety across Europe.
Do you know if anyone is doing a coordinated project to reach out to promising candidates from all EU countries and encourage them to apply? I’m worried that great candidates may not hear about it in time.
Most associations I know never got a rescrit fiscal and never got audited. Also, you would only get penalties if you were not actually eligible for tax-deductible donations, and it’s usually obvious whether that’s the case, so there’s no need for government confirmation.
Do you know some associations that did? How was the process?
Thank you for the post! I’ve regularly pointed out the spurious negative correlations from stratification in conversations, but never had a link to point to for an explanation.
Case in point: my smartest close friend is also the least hard-working. Sometimes I worry he’ll find his way to reliable executive function and leave me behind :’)
IMO it makes sense for an event venue not to list the events it’s hosting, especially when they’re run by orgs unaffiliated with Lightcone Infrastructure. I expect the Vitalist Bay organizers don’t want to be straightforwardly associated with all the other events running at Lighthaven.
AFAIK most events running at Lighthaven related to the rationalist community are advertised on LessWrong. See the Lighthaven tag for some examples.
I’m looking for websites tracking the safety of the various frontier labs. So far, I’ve found these:
Seoul Commitment Tracker → Whether frontier AI companies have published their “red line” risk evaluation policy, in accordance with their commitments at the AI Seoul Summit
AI Lab Watch → Tracker of actions frontier AI companies have taken to improve safety
Safer AI Risk Management Ratings → Ratings of frontier AI companies’ risk management practices
Do you know of any others?
I’m currently writing a grant application to build websites specifically tracking how frontier AI labs are fulfilling the EU Code of Practice, how close frontier models from each lab are to various red lines, and how robust each lab’s evaluation methodologies are (probably as separate websites). I’d be interested in any pointers to existing work on this.
IMO Janus mentoring during MATS 3.0 was quite impactful, as it led @Quentin FEUILLADE—MONTIXI to start his LLM ethology agenda and to cofound PRISM Eval.
I expect that there’s still a lot of potential value in Janus’s work that can only be realized by making it more legible to the rest of the AI safety community, be it through mentoring or posting on LW.
I wish someone in the cyborgism community would pick up the ball of explaining the insights to outsiders. I’d gladly pay for a subscription to their Substack, and help them find money for this work.
Yeah, the last post was two years ago. The Cyborgism and Simulators posts improved my thinking and AI strategy. The void may become one of those key posts for me, and it seems it could have been written much earlier by Janus himself.
AFAIK Janus does not publish posts on LessWrong to detail what he discovered and what it implies for AI Safety strategy.
Positive update on the value of Janus and his crowd.
Does anyone have an idea of why those insights usually don’t make it into the AI Safety mainstream? It feels like Janus could have written this post years ago, but somehow did not. Do you know of other models of LLM behaviour like this one that still haven’t had their “nostalgebraist writes a post about it” moment?
Agreed that the current situation is weird and confusing.
The AI Alignment Forum is marketed as the actual forum for AI alignment discussion and research sharing. However, it seems that the majority of discussion shifted to LessWrong itself, in part due to most people not being allowed to post on the Alignment Forum, and most AI Safety related content not being actual AI Alignment research.
I basically agree with Reviewing LessWrong: Screwtape’s Basic Answer. It would be much better if AI Safety related content had its own domain name and home page, with some amount of curated posts flowing to LessWrong and the EA Forum to allow communities to stay aware of each other.
Signals of Competence
Draft post made during Inkhaven. Interested in feedback.
Signals of Competence is the model I use to advise friends on how to build career capital.
When deciding whom to hire, an organization assesses the competence of candidates by looking at the various signals they send in their CV, cover letter, interview, or work test.
Each signal sits at a point along two main dimensions:
Reach/Breadth: how wide a population understands the signal. A Harvard degree is broad. Having built a specific niche piece of software is narrow.
Detail/Depth: how detailed a picture of your competence it paints. A degree only signals that you’re in the distribution of degree holders; it’s very shallow. A portfolio shows specific things you made by yourself; it’s deeper. Having worked with someone for years gives them a detailed model of how you work and what you’re good at; that’s as deep as it gets.
Signals of Competence are of two types:
Individual: Historically, all signals of competence were of the form “Here is an example of my work” or “I worked with you or your friend before so you trust me”. Those signals are usually detailed, but as they’re specific to you, they don’t work beyond your social circles.
Institutional: Institutions were built to create signals of competence that are portable (e.g. degrees, certifications). The trust is no longer in the specific individual, but in the institution that attests to their competence.
Small and big organizations generally care about different signals
In big organizations, recruiters usually don’t have the technical knowledge to assess detailed signals of competence. They will rely on broad ones, like degrees. To get hired there, you have to max out the institutional signals, which will be understandable by an MBA without domain-specific knowledge.
In small organizations, the cost of a bad hire is much higher, and the recruiter will be much more technical. There, they will want the most detailed signal they can get, to reduce the risk that you’ll tank the company. You’ll want specific personal connections or direct experience working on their topic.
Your CV is the collection of those signals. There are three ways to improve it:
More detailed signals: build up a portfolio in the specific industry you want to get into, work with people who can attest to your competence and recommend you, publish in the recognized venues of the field → increases your chances of breaking into a specific field
More widely legible signals: attend a top university, work in big tech, anything with wide brand recognition → increases your ability to pivot to any industry
Push the Pareto frontier of breadth/depth: being the maintainer of a famous open-source project is a widely understood signal in tech that gives you lots of credibility, even though it’s worth little outside tech
Recruiters look for three kinds of signals:
Technical Competence: are you good at the specific thing the org is doing?
Executive Function: are you a reliable person that will get shit done on time?
Culture Fit: are you someone that they’ll enjoy working with or will you make them miserable?
Make sure you signal your competence on all three.