Sure; unfortunately what’s happening at rationalist conferences is that frequently the most socially unaware/attention seeking person in the room is speaking up, in a way that does not actually contribute, and encourages other socially unaware people to go do it at other talks.
lc
If you attend a talk at a rationalist conference, please do not spontaneously interject unless the presenter has explicitly clarified that you are free to do so. Neither should you answer questions on behalf of the presenter during a Q&A portion. People come to talks to listen to the presenter, not a random person in the audience.
If you decide to do this anyways, you will usually not get audiovisual feedback from the other audience members that it was rude/cringeworthy for you to interject, even if internally they are desperate for you to stop doing it.
“Successionism” is such a bizarre position that I’d look for the underlying generator rather than try to argue with it directly.
It doesn’t take “8 weeks” to come up with a good example if you already understand the problem. Most of the “8 weeks” is spent realizing you didn’t understand the problem correctly enough to create the POC, or to propose a solution that works.
I bought two tickets for LessOnline, one for me and one for a friend. I used the same email for both, but unfortunately now we can’t login to the vercel app where we sign up for events! Any way an operator can help me here?
Twitter users are awful.
What is the context here?
I think the main problem is that the second cover looks really rushed.
Love the title
One large part of the AI 2027 piece is contigent on the inability of nation state actors to steal model weights. The authors’ take is that while China is going to be able to steal trade secrets, they aren’t going to be able to directly pilfer the model weights more than once or twice. I haven’t studied the topic as deeply as the authors but this strikes me as naive, especially when you consider side-channel[1] methods of ‘distilling’ the models based on API output.
@Daniel Kokotajlo @ryan_greenblatt Did you guys consider these in writing the post? Is there some reason to believe these will be ineffective, or not provide the necessary access that raw weights lifted off the GPUs would give?
- ^
I would normally consider “side channel attacks work” to be the naive position, but in the AI 2027 post, the thesis is that there exists a determined, well resourced attacker (China) that already has insiders who can inform them on relevant details about OpenAI infrastructure, and already understands how the models were developed in the first place.
- ^
are… are you sure you read the post you’re responding to?
We definitely read the same words!
I think Cremieux is an honest[1], truthseeking, and intelligent guest speaker, and I would be extraordinarily disappointed in the organizers if they disinvited him. I also have a very high opinion of LessOnline’s organizers, so I’m not particularly worried about them cowtowing to attempts to chill speech.
- ^
(In the sense of e.g. his work output being factually correct, not speaking to his character personally)
- ^
The full article sort of explains the bizarre kafkaesque academic dance that went on from 2020-2022, and how the field talked about these changes.
My hypothesis there is that we have systematized VC-backed YC-style founders. The rules are a lot easier to discover and follow, the track record there makes it a career path one can essentially plan on in a way that it wasn’t before, and the people who gate progress with money are there to reward those who internalize and follow those principles.
But a paved road would create more outsized successes, not fewer.
I think a better theory is that corp dev has systematized acquiring companies-that-look-like-Google very early. Early multiples on revenue for YC startups are insane. 23 year old founders coming out of the gate today with Google’s growth circa 2000 get a buyout offer 10-100x what Larry & Sergei did and sell immediately.
I loved the MASK benchmarks. Does anybody here have any other ideas for benchmarks people could make that measure LLM honesty or sycophancy? I am quite interested in the idea of building an LLM that you can trust to give the right answer to things like political questions, or a way to identify such an LLM.
Wow, I didn’t realize.
The two guys from Epoch on the recent Dwarkesh Patel podcast repeatedly made the argument that we shouldn’t fear AI catastrophe, because even if our successor AIs wanted to pave our cities with datacenters, they would negotiate a treaty with us instead of killing us. It’s a ridiculous argument for many reasons but one of them is that they use abstract game theoretic and economic terms to hide nasty implementation details
Formatting is off for most of this post.
“Treaties” and “settlements” between two parties can be arbitrarily bad. “We’ll kill you quickly and painlessly” is a settlement.
I make the simpler request because often rationalists don’t seem to be able to tell when this is (or at least tell when others can tell)