Relatedly, I’ve been thinking about building a schedule-this-post-for-publication feature. If I finish a post at 10pm, it’s often better for visibility to publish the next morning. My guess is this would be useful for Inkhaven Residents who finish writing near midnight.
If I could schedule posts, frontpage review happened before publishing, and the schedule UI had a “delay publishing until frontpage”[1] checkbox, this would be ~solved.
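Concretely, the publish check could look something like the minimal TypeScript sketch below. Every name in it (`ScheduledPost`, `shouldPublishNow`, the field names) is hypothetical illustration, not actual LessWrong internals:

```typescript
// Hypothetical sketch of the proposed scheduling logic.
// None of these types or names are real LessWrong/ForumMagnum APIs.

type ReviewStatus = "pending" | "frontpaged" | "personal";

interface ScheduledPost {
  scheduledAt: Date;             // author-chosen publication time
  delayUntilFrontpage: boolean;  // the proposed checkbox
  reviewStatus: ReviewStatus;    // outcome of pre-publication review
}

function shouldPublishNow(post: ScheduledPost, now: Date): boolean {
  // Gate 1: the scheduled time must have arrived.
  if (now.getTime() < post.scheduledAt.getTime()) return false;
  // Gate 2: if the checkbox is unset, plain scheduling applies.
  if (!post.delayUntilFrontpage) return true;
  // With the checkbox set, hold the post until review frontpages it.
  return post.reviewStatus === "frontpaged";
}
```

The point being that the scheduled time and the review outcome act as independent gates, and the post only goes live once both clear.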
I’d prefer that checkbox to “delay publishing until human review”: ~half a dozen times in the past few years I’ve appealed via Intercom and had a human-reviewed post retroactively frontpaged (usually a resource; the LW team’s prior seems to be something like ‘this won’t be maintained’, but mine are, because I optimize a bunch for not leaving stale projects).
Examples, which Rafe requested when I mentioned this: the following were all marked as personal blog until I intercom’d in and asked for a re-assessment:
https://www.lesswrong.com/posts/JsqPftLgvHLL4Pscg/new-weekly-newsletter-for-ai-safety-events-and-training
https://www.lesswrong.com/posts/dEnKkYmFhXaukizWW/aisafety-community-a-living-document-of-ai-safety
https://www.lesswrong.com/posts/vxSGDLGRtfcf6FWBg/top-ai-safety-newsletters-books-podcasts-etc-new-aisafety (nudge didn’t work for this one)
https://www.lesswrong.com/posts/MKvtmNGCtwNqc44qm/announcing-aisafety-training
https://www.lesswrong.com/posts/JRtARkng9JJt77G2o/ai-safety-memes-wiki
https://www.lesswrong.com/posts/x85YnN8kzmpdjmGWg/14-ai-safety-advisors-you-can-speak-to-new-aisafety-com