Distillation Contest—Results and Recap

This post:

  • Announces the winning submissions to the Distillation Contest

  • Gives further insight into the scoring process for the contest

  • Examines the effectiveness of our advertising strategies

  • Gives a brief impact estimate of the contest

  • Shares my advice for community builders who are planning to run contests

Notes:

A huge thank you to Akash and all of the judges for this contest! This wouldn’t have been possible without their work. I’m definitely not perfect! I imagine there are better ways to advertise, run, and score a contest, so I wanted to be transparent about my process so that other people could make suggestions if they have ideas for improvements.

Want to use the materials from the Distillation Contest to run your own version? I’m in the process of creating a platform for EA contests and will soon have a demo site up! I’m planning to upload all of the Distillation Contest materials into a “bundle” so that anyone can host their own contest easily. Once the site is up (hopefully in the next couple of weeks), feel free to use anything there and iterate to make better resources. If you have your own EA contest resources and would like to make them available to other people, I’d love to add them to the Notion!

Cross-posted on the EA Forum.

Winners

The submission winning first place in the Distillation Contest is Understanding Selection Theorems by UC Berkeley’s Adam Khoja – distilling John Wentworth’s Selection Theorems: A Program for Understanding Agents. Our second-place winner is The Geometry of Adversarial Perturbations by Gabriel Wu from Harvard University – a distillation of Universal Adversarial Perturbations.

We’ve granted 15 other prizes (six $500 awards and nine $250 awards), plus three honorable mentions. The six $500 winners are listed below, with their distillations linked to their names. You can find the other finalists, and their distillations, on the EA Berkeley Distillation Contest winners page.

Callum McDougall, Cambridge

Jasper Day, University of Edinburgh (has not yet given permission to share their submission)

Harrison Gietz, Louisiana State University

Sasha Sato, UC Berkeley

Chinmay Deshpande, Harvard University

Yash Dave, UC Berkeley

Scoring

Each submission was scored by two judges (we had five judges in total, all of whom actively work in the alignment space). Our rubric took into account the submission’s Depth of Understanding, Clarity of Presentation, Concision/Length, Originality of Insight, Accessibility, and two especially subjective measures, X-Factor and Subjective Rating, explained below.

X-Factor

  • Some submissions may end up scoring low despite being amazing because they are exceptional for a quality that’s missed by the factors listed above (they make really great applications of the material, they synthesize multiple sources, they have unexpectedly unique and useful dimensions, etc.). You can grant as many additional points to a distillation as you’d like when you score with this X-Factor category. Most submissions will not have an X-Factor effect, so do not feel required to give out X-Factor points.

Subjective Rating

  • Ignore the rubric. Assume you just had to rate the submission on a scale from 1-10 (including decimals). What rating would you give this submission?

Once the judges had scored their submissions, we computed each judge’s average score and divided each of their submission scores by that average to get an adjusted score. For example, if a judge’s average score was 40, a score of 50 from that judge would become 1.25.

There were 19 submissions that were rated above average by both of their judges. Since this was already more than the number of submissions we said we’d award, we mostly compared these submissions to one another from that point on. We also looked at “controversial” submissions (those where one judge rated the distillation above average and the other rated it below average) to see if there were any high-quality responses that deserved a prize.

One of these “controversial” submissions was rated so highly by one judge (and only barely below average by the other) that its average was high enough to receive an award. We later realized this was due to an error in our record of the judges’ scores, and the submission received a $500 award.

The submissions rated above average by both judges were then sorted in descending order of adjusted score and labeled by the place they came in (1, 2, 3, …). Since each submission had two adjusted scores, I added its two place numbers together, sorted the submissions by this combined total, and the submissions with the lowest totals won.
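For anyone curious about the mechanics, here’s a minimal sketch of that adjustment-and-ranking process. The submission names, judge labels, and scores are all made up, and I’ve used just two judges for simplicity (in the real contest each submission was scored by two of our five judges); this illustrates the process rather than reproducing the actual spreadsheet we used.

```python
from collections import defaultdict

# Hypothetical raw scores keyed by (submission, judge).
raw_scores = {
    ("sub_A", "judge_1"): 50, ("sub_A", "judge_2"): 46,
    ("sub_B", "judge_1"): 46, ("sub_B", "judge_2"): 47,
    ("sub_C", "judge_1"): 38, ("sub_C", "judge_2"): 35,
    ("sub_D", "judge_1"): 30, ("sub_D", "judge_2"): 33,
}

# Step 1: compute each judge's average raw score.
judge_scores = defaultdict(list)
for (submission, judge), score in raw_scores.items():
    judge_scores[judge].append(score)
judge_avg = {judge: sum(s) / len(s) for judge, s in judge_scores.items()}

# Step 2: divide each raw score by that judge's average to get an adjusted
# score (e.g. a 50 from a judge whose average is 40 becomes 1.25).
adjusted = {
    (submission, judge): score / judge_avg[judge]
    for (submission, judge), score in raw_scores.items()
}

# Step 3: keep only submissions rated above average (adjusted score > 1)
# by both of their judges.
per_submission = defaultdict(dict)
for (submission, judge), score in adjusted.items():
    per_submission[submission][judge] = score
finalists = {s for s, scores in per_submission.items()
             if all(v > 1 for v in scores.values())}

# Step 4: put all of the finalists' adjusted scores into one descending list,
# label each with its place (1, 2, 3, ...), then sum the two place numbers
# for each submission; the lowest totals win.
all_scores = sorted(
    ((score, submission) for (submission, judge), score in adjusted.items()
     if submission in finalists),
    reverse=True,
)
rank_sums = defaultdict(int)
for place, (_, submission) in enumerate(all_scores, start=1):
    rank_sums[submission] += place

winners = sorted(finalists, key=lambda s: rank_sums[s])
print(winners)  # finalists ordered from lowest (best) to highest rank sum
```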

Advertising

I currently believe that the best direct outreach methods for an EA contest are reminders from an existing EA or AI Safety group, in-class announcements, and flyering. The best indirect methods seem to be advertising to other EA group organizers and to newsletters or blogs that might promote your contest.

Who submitted and how did they hear about the contest?

  • We received over 50 submissions from 26 universities.

    • Some of these were revisions or accidental double submissions, so after removing superfluous submissions we had 48.

  • 165 people filled out our interest form.

  • On the interest form, people were asked how they heard about the contest. In descending order of popularity, the answers were: local EA/AI Safety group, digital advertising, other, EA Forum, through a friend, flyering on campus, in-class announcement, Reddit, and Discord server announcement.

  • On the submission form, when asked how they heard about the contest, the majority of people who didn’t attend UC Berkeley said they heard about it through digital advertising, their local EA/AI Safety group, a friend, an in-class announcement, other, or an ACX announcement (in descending order of popularity).

  • On the UC Berkeley campus, we put up flyers on the walls of common spaces for CS and math students. We also sent out Reddit, Discord, and club messages about the contest, advertised on EA Berkeley’s Facebook page, and announced it at our meetings. Nearly half of the Berkeley students who submitted to the Distillation Contest found the contest through flyering.

    • 30 of the students on the interest form were from Berkeley and 5 of the final submissions were from Berkeley.

    • One of the students in my group was sent photos of the flyers by three separate friends who thought he might be interested! This again indicates to me that flyering was a strong tactic.

Other notes on advertising

  • I hired two people to work part-time on advertising the contest at Berkeley. One had no background in EA and the other had little. Both were CS students at the university, so they gathered info on the Discords, clubs, and Reddit threads that Berkeley CS students use and then sent out drafts to those clubs. They also led tabling, flyering, and inter-club communication efforts for the contest.

  • We found that tabling was completely ineffective, even with a banner and flyers. This was likely because the tabling happened late in the semester, when students were studying for finals. Even so, I would still highly recommend taping up flyers instead of setting up a table and flyering (if your campus is similar to Berkeley’s).

  • Other ways we considered advertising that didn’t end up happening:

    • Give in-class announcements (other schools did this and it seemed successful!)

    • Create chalk art and make ads around campus

    • Create an Instagram and pay to promote our posts. (We made an Instagram but didn’t get to promote it before finals began.)

Impact

I’d imagine I invested something like 60-80 hours into this contest over the past few months, and other people invested a total of something like 70 hours (advertising, getting funding, collaborating, scoring submissions). If that found us a few more people who counterfactually pursue AI Safety careers, or AI Safety researchers who gain counterfactual opportunities because they received an award in this contest, that seems worth the time to me. And it seems like those things are, at least somewhat, on track to happen:

Promising people found EA and my university’s EA group through contests! (Yay nerdsniping!)

  • Before the Distillation Contest, I ran a smaller AI Safety contest at Berkeley. Even though only 7 people submitted, I found two talented CS people who had never interacted with AI Safety before and are interested in learning more! By the end of the Distillation Contest, the majority of UC Berkeley submissions were from people who hadn’t interacted with the club before (one of these people even won a $500 prize!). A couple of these new people have since joined our summer reading groups or attended our socials, and one is going to EAG in a few days!

  • To investigate the contest’s impact further, I’m planning to send out a feedback form to organizers from other universities that had student submissions to see if they’ve had similar experiences with the contest attracting new people or increasing club engagement.

I’ve already had multiple occasions where I’ve been asked to recommend promising students for mentorship or opportunities.

  • People have specifically asked for the information of the winners of this contest in order to invite them to events.

  • In contexts where I’ve been asked for general recommendations, I’ve been able to easily provide evidence for why I believe certain people are agentic and have potential.

As someone who didn’t study CS, reading these distillations has been helpful to me. A couple of other people who have read the submissions have said the same :) You can read the linked distillations and see if this happens to you too!

Community Building Advice

These are things I’ve either learned about contest community building or things I wish I had known when I started creating contests:

  • Create an interest form!

    • This lets you keep a record of people who thought your idea was cool and also lets you compare with how many people actually submitted to the contest. You can also advertise future events to people who fill out the interest form.

    • I would recommend linking the interest form in your advertisements, not the submission form. In the first contest I ran, I forgot to make an interest form and had no way of contacting people before the contest closed.

  • Actually follow up with updates on the interest form!

    • People will be more likely to submit if you send reminders before the contest ends. I’d recommend sending out updates every two weeks or so while the contest is open, then reminders a week before the deadline, three days before the deadline, and the day of the deadline.

  • Tabling was our worst advertising technique. Putting up flyers is less costly for your own time and, if done well, gets good exposure.

    • Tabling seems worth it if there are special tabling events happening, such as start-of-the-year club fairs.

  • Hiring people to advertise was helpful! But it does take some management time.

    • I spent time setting weekly goals for both people who did advertising, as well as asking them to fill out weekly self-evaluations that I would read. They kept on track and I could easily follow their updates on tasks. The job definitely wasn’t management-free but it was super helpful not to be the only one working on advertising.

  • Allot more time for judging than you think.

    • It can be tough to coordinate schedules and find people to score things in a short timescale, especially if they’re doing AI Safety work themselves.

I also followed up with John Wentworth, whose post Call for Distillers inspired the contest, for feedback. The creator of the winning distillation was gauged to be at about the right level for SERI MATS. This indicates to me that the Distillation Contest can, at least to some degree, identify people who could be talented at AI Safety research.

Takeaway

In general, it seems to me that contests are a pretty low-stakes way to:

  • Find new people who could contribute to AI Safety

  • Give legible rewards to people who show initiative and could make contributions to AI Safety research

I also think contests could be used in other cause areas and have many potential uses (upskilling, producing valuable outputs, etc.), but I think those vary depending on the contest itself.

Thanks for reading!