Hi, I am a Physicist, an Effective Altruist and AI Safety student/researcher.
A statement I could truthfully say:
"As an AI safety community member, I predict that I and others will be uncomfortable with 80k if this is where things end up settling, because of disagreeing. I could be convinced otherwise, but it would take extraordinary evidence at this point. If my opinions stay the same and 80k's are also unchanged, I expect this to make me hesitant to link to and recommend 80k, and I would be unsurprised to find others behaving similarly."
But you did not say it (other than as a response to me). Why not?
I'd be happy for you to take up the discussion with 80k and try to change their behaviour. This is not the first time I've told them that if they list a job, a lot of people will both take it as an endorsement and trust 80k that this is a good job to apply for.
As far as I can tell, 80k is in complete denial about the large influence they have on many EAs, especially local EA community builders. They have a lot of trust, mainly from having been around for so long. So whenever they screw up like this, it causes enormous harm. Also, since EA has such a large growth rate (at any given time, most EAs are new EAs), the community is bad at tracking when 80k does screw up, so they don't even lose that much trust.
On my side, I've pretty much given up on them caring at all about what I have to say, which is why I'm putting so little effort into how I word things. I agree my comment could have been worded better (with more effort), and I have tried harder in the past. But I also have to say that I find the extreme politeness lots of EAs show towards high-status orgs very off-putting, so I've never been able to imitate that style.
Again, if you can do better, please do so. I’m serious about this.
Someone (not me) had some success at getting 80k to listen, over at the EA forum version of this post. But more work is needed.
(FWIW, I’m not the one who downvoted you)
I also have limited capacity.
Actually, it's probably true if you don't control for intelligence.
Autism is negatively correlated with intelligence, and if you're not very smart, everything gets harder. But I think it's wrong to see low intelligence as part of autism. And even if you disagree, it's weird to classify a general intelligence problem as a specific social-deficit problem.
But if you compare high-functioning autists with neurotypicals, in realistic enough settings, I'm convinced autists will be better at understanding autists than neurotypicals are. Although "realistic enough" might require giving the autists enough time to interact to spot each other as the same type of person.
I don't put a lot of weight here on academic studies, over all of my life experience. But in case you do: I did hear of a study where autists worked better with other autists than neurotypicals did with neurotypicals. I don't have the link, sorry. Just my memory of someone I trust telling me about it.
The reason I don't trust academic studies on this topic is that it is really, really hard to do them well, so most of them are not done well.
only about equal to them at understanding fellow autists
I do not believe this.
Empathy is a useful tool; I use it too, to generate initial guesses about people. But I'm also aware that it's untrustworthy. In my experience, it's common for neurotypicals to fail at this last step.
Neurotypicals are more accurate for other neurotypicals. Autists are more accurate for other autists.
Since there are more neurotypicals (are there? or just more people pretending to be?*), neurotypicals are statistically more often correct**. But I would still not claim that this is a higher level of social skill. Having a higher skill level at a more common task is not the same thing as having higher overall skill. This detail is very important for understanding autism.
* Not saying that autists are the majority. But there are more neurotypes out there, and probably lots that are better at masking than autists.
** In a statistically representative environment. When autists are free to self-segregate, we no longer have problems. This is the main point, actually. If it were just a one-dimensional skill issue, concentrating lots of autists would go terribly, since no one would have social skills; but instead it's great. All the people I get along with best are other autists (officially or self-diagnosed).
that's, IMO, the mechanism by which empathy works
Empathy is a very unreliable source of information though. E.g. I feel empathy with my plushies.
Temporarily deleted since I misread Eli’s comment. I might re-post
However, we don’t conceptualize the board as endorsing organisations.
It doesn't matter how you conceptualize it. It matters how it looks, and it looks like an endorsement. This is not an optics concern. The problem is that people who trust you will see this and think OpenAI is a good place to work.
These still seem like potentially very strong roles with the opportunity to do very important work. We think it’s still good for the world if talented people work in roles like this!
How can you still think this after the whole safety team quit? They clearly did not think these roles were any good for doing safety work.
Edit: I was wrong about the whole team quitting. But given everything, I still stand by the claim that these jobs should not be listed without at least a warning sign.
As an AI safety community builder, I'm considering boycotting 80k (i.e. not linking to you and recommending that people not trust your advice) until you at least put warning labels on your job board. And I'll recommend other community builders do the same.
I do think 80k means well, but I just can't recommend any org with this level of lack of judgment. Sorry.
This post reminds me of the Word2vec algebra.
E.g. "kitten" − "cat" + "dog" ≈ "puppy"
I expect that this will be true for LLM token embeddings too. Has anyone checked this?
I also expect something similar to hold for internal LLM representations, but that might be harder to verify. However, maybe not, if you have interpretable SAE vectors?
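The analogy arithmetic above can be sketched in a few lines. A minimal toy example, with made-up 3-d vectors standing in for real learned embeddings (for actual token embeddings, you would run the same nearest-neighbour check against the model's embedding matrix):

```python
import numpy as np

# Made-up 3-d vectors for illustration only; real word2vec embeddings
# are learned and typically 100-300 dimensional.
emb = {
    "cat":    np.array([1.0, 0.0, 0.0]),
    "kitten": np.array([1.0, 1.0, 0.0]),  # "cat" plus a shared "young animal" direction
    "dog":    np.array([0.0, 0.0, 1.0]),
    "puppy":  np.array([0.0, 1.0, 1.0]),  # "dog" plus the same "young animal" direction
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(a, b, c, emb):
    """Return the word closest (by cosine) to vec(a) - vec(b) + vec(c),
    excluding the query words themselves, as word2vec tooling usually does."""
    target = emb[a] - emb[b] + emb[c]
    candidates = {w: v for w, v in emb.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("kitten", "cat", "dog", emb))  # prints: puppy
```

The toy vectors are constructed so that the "kitten minus cat" offset equals the "puppy minus dog" offset; whether the analogy holds for a given model's embeddings is exactly the empirical question being asked here.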
Ah, ok. I did not pick up on that. Thanks for clarifying.
Thanks :)
The EA SummerCamp takes place the next weekend
I've not been to any of these, but would like to. Is there any info up yet for this year's EA SummerCamp?
Second part being “there will be an AISC10”?
Very sure.
As long as Remmelt and I are still alive and healthy a couple of months from now, we're doing it.
Remmelt has organised 8 previous AISCs, and I've been part of 3 of those. We know what we are doing. We know we can rely on each other, and we want to do this.
We just needed to make sure we had money to live and eat and such before we could commit to running the next camp. But we have received the money now, so that's all good. Manifund has sent us the money; it's in our bank accounts.
I'll bet anyone who likes that there will be an AISC10, at 1:10 odds in your favour. I'm much more confident than that, but if you give me worse odds, then I don't think I can be bothered.
AISC9 has ended and there will be an AISC10
I disagree. In verbal space, MARS and MATS are very distinct, and they look different enough to me.
However, if you want to complain, you should talk to the organisers, not one of the participants. Here is their website: MARS — Cambridge AI Safety Hub
(I’m not involved in MARS in any way.)
I’ve now updated the event information to include summaries/abstracts for the projects/talks. Some of these are still under construction.
Ok, you're right that this is a very morally clear story. My bad for not knowing what a typical tabloid story looks like.
Missing kid = bad,
seems like a good lesson for AI to learn.
Why?
The reason for the ban is pretty cruxy. Is Lightcone banned because OpenPhil dislikes you, or because you're too close so that it would be a conflict of interest, or something else?