For those who missed it, all of the AI Safety Arguments that won the competition can be found here, randomly ordered.
If you or anyone you know is ever having any sort of difficulty explaining anything about AI safety to anyone, these arguments are a good place to look for inspiration; other people have already done most of the work for you.
But if you really want to win this current contest, I highly recommend using bullet-pointed summaries of the 7 works stated in this post, as well as deeply reading the instructions instead of skimming them (because this post literally tells you how to win).
I’m not sure what you mean by “using bullet-pointed summaries of the 7 works stated in the post”. If you mean the past examples of good materials, I’m not sure how good of an idea that is. We don’t just want submissions to be rephrasings/”distillations” of single pieces of prior work.
I’m also not sure we literally tell you how to win, but yes, reading the instructions would be useful.
I meant reading them and making bullet-pointed lists of all valuable statements, in order to minimize the risk of forgetting something that could have been a valuable addition. You make a very good point that there are pitfalls with this strategy, like producing a summary with too many details when the important thing is a galaxy-brain framing that will demonstrate the problem to different types of influential people with the maximum success rate.
I think actually reading (and taking notes on) most/all of the 7 recommended papers that you guys listed is generally a winning strategy, both for winning the contest and for winning at solving alignment in time. But only for people who can do it without forgetting that they’re making something optimal/inspirational for minimizing the absurdity heuristic, not fitting as many cohesive logic statements as they can onto a single sheet of paper.
In my experience, constantly thinking about the reader (and even getting test-readers) is a pretty fail-safe way to get that right.
It sure would be nice if the best talking points were ordered by how effective they were, or ranked at all really. Categorization could also be a good idea.
These are already the top ~10%; the vast majority of the submissions aren’t included. We didn’t feel we really had enough data to accurately rank within these top 80 or so, though some are certainly better than others. Also, it really depends on the point you’re trying to make or the audience; I don’t think an objective ordering really exists.
We did do categorization at one point, but many points fall into multiple categories, and there are so many individual points that we didn’t find the categorization very useful once we had it.