I find myself, for the first time in a while, with enough energy and stability to attempt nontrivial projects outside my dayjob. Regarding the next ~10 months, I’ve narrowed my options to two general approaches; as expected beneficiaries of both, I’d like the LessWrong hivemind’s help choosing between them.
The first option is making more D&D.Sci Scenarios, running them on a more consistent schedule, crossposting them to more platforms, and getting more adventurous about their form and content. The second is creating Epistemic Roguelikes, a new[1] genre of rationalist videogame about deducing and applying the newly-randomized ruleset each run.
Prima facie, prioritizing D&D.Sci this year (and leaving more speculative aspirations to be done next year if at all) seems like the obvious move, since:
- D&D.Sci projects are shorter and more self-contained than game projects, and I have a better track record with them.
- At time of writing, D&D.Scis can still flummox conventionally-applied conventional AIs[2]. Open opportunities for robots, humans, and centaurs to test their mettle would be a helpful (if infuriatingly low-N) sanity check on other metrics.
- This time next year, a data-centric challenge hard enough to mess with AIs but toyish enough to be fun for humans could be an oxymoron; if I want to apply my backlog of scenario ideas, it might be now-or-never[3].
- Conversely, if AI capabilities do stay at about this level for a while, publicly and repeatedly demonstrating that I can make good AI-proof test tasks may end up being really good for my career.
However:
- Content creation is in general a long-tailed domain. I’ve been making D&D.Scis for half a decade now, and while it’s been fun, it hasn’t led to runaway success. Trying other things – on the off-chance they do lead to runaway success – seems warranted.
- It turns out I’m actually a pretty good writer. D&D.Sci leans on that skill only lightly; the game(s) I’m interested in would make much more intensive use of it.
- Three of the four points in favor center on AI; plans that hinge on short-term frontier AI progress are inherently much less stable and much more nerve-wracking.
- I really enjoyed inventing a genre and I’d like to do that again.
Any thoughts would be appreciated.
[1] As far as I know; please prove me wrong!

[2] I tried a handful of them on chatgpt-thinking; tough straightforward ones like the original were handled better than the average human player at the time, but easy tricky ones like these two were fumbled.

[3] I’m pretty bearish on AI by LW standards, so I actually don’t think this is likely, but the possibility perturbs me.
Here’s my vote for Epistemic Roguelikes. It seems like a riskier path, but with a lot more upside.
The usual answer to this question is to do whichever one you’re personally the most excited to work on. If the question of what LW people would like happens to be relevant to that, I will also chime in alongside Drake Morrison that epistemic roguelikes sound really cool.
I vote that you work on horizon modelling and automated risk model updates with your pal technicalities
Understand is a puzzle game that is basically what you describe as an Epistemic Roguelike, in that you deduce a different ruleset for every set of levels. It’s not an actual roguelike, though: it’s purely focused on the puzzle of figuring out rules, which are limited to a simple grid with shapes.
I would say that, as something exploring a relatively unplumbed space of content, Epistemic Roguelikes are more likely to be interesting in a time when AI can make average copies of any existing content and mainly struggles with new concepts.
Correspondingly, I think that D&D.Sci might be less useful now that you could plausibly automate a large chunk of the process of creating scenarios? That’s my impression based on checking out the posts, although I haven’t actually completed one.