EAGT Coffee Talks: Two toy models for theoretically optimal pre-paradigmatic research

    Hosted in the EA Gather Town. Subscribe to our calendar to be notified of new events.

    This is hopefully one of a series of casual talks and discussions self-hosted by anyone who wants to use the virtual community platform to contribute. Just get in touch with us and we can make an event post on the forum and add it to our calendar.

    (Let me know if you’re coming by RSVPing)

    Topic

    I’m going to talk abstractly about how theoretically optimal research strategies are narrowly fine-tuned along two separate dimensions, and discuss the practical implications for how to actually do pre-paradigmatic research.

    Depending on how many show up, this can either be an informal back-and-forth, or a talk in one of the auditoriums. The talk itself will be around 35 minutes with optional discussion afterwards, and you can write collaborative notes during or after the talk.

    I’ll briefly try to explain three main ideas/claims:

    1. How compromising between unfactorable optimisation criteria costs more than people realise (see the first sketch below).[1]

    2. How deviating from an optimal balance between forward- and backward-chaining (“chaining balance”) can be exponentially inefficient (see the second sketch below).

    3. Finally, how insufficient financial slack almost forces you to be exponentially inefficient if you want to keep your job. I have no obvious solutions; I’m just pointing at what I believe is a big problem.

    (I don’t expect the above to make sense yet, but that’s what the talk is for. They’re simple ideas under the jargon.)
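
    To gesture at what claim 1 means, here is a minimal numeric sketch of my own (the criteria, peak positions, and widths are made up for illustration; this is not necessarily the model from the talk). When two sharply peaked criteria depend on the same underlying choice, the split-the-difference compromise can score near zero on both:

    ```python
    import math

    # Two sharply peaked criteria that share one design variable x, so they
    # cannot be factored into independent choices. All numbers hypothetical.
    def crit_a(x: float) -> float:
        return math.exp(-((x - 0.0) / 0.1) ** 2)  # peaks at x = 0

    def crit_b(x: float) -> float:
        return math.exp(-((x - 1.0) / 0.1) ** 2)  # peaks at x = 1

    for x in (0.0, 0.5, 1.0):
        print(f"x = {x}:  A = {crit_a(x):.3f}  B = {crit_b(x):.3f}")
    # The naive compromise x = 0.5 scores ~0 on *both* criteria. If A and B
    # instead depended on two separate variables (factorable criteria), you
    # could score 1.0 on each simultaneously, and compromise would be free.
    ```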
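
    Similarly for claim 2, a toy search-cost model (again my own guess at the kind of model involved, with illustrative numbers): if the goal is d inferential steps from your starting point in a space with branching factor b, chaining forward k steps and backward the remaining d - k steps explores roughly b^k + b^(d-k) nodes, so the balanced split is exponentially cheaper than any lopsided one:

    ```python
    def search_cost(b: int, d: int, k: int) -> int:
        """Rough node count when forward-chaining k steps and
        backward-chaining the remaining d - k steps."""
        return b ** k + b ** (d - k)

    b, d = 3, 12  # arbitrary branching factor and problem depth
    optimal = min(search_cost(b, d, k) for k in range(d + 1))  # balanced: k = 6
    for k in (6, 8, 10, 12):  # k = 12 is pure forward-chaining
        cost = search_cost(b, d, k)
        print(f"forward depth {k:2d}: {cost:7d} nodes ({cost / optimal:6.1f}x optimal)")
    # Each extra step of imbalance multiplies the cost by roughly b, which is
    # why deviating from the optimal chaining balance is exponentially bad.
    ```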

    The feeling I hope to leave people with is that there is unexplored territory here, and pre-paradigmatic goal-driven research can be made much more efficient by stepping back and doing more meta-work first. I think investing in figuring out how to figure things out still has steep marginal returns beyond the frontier of the rationalist paradigm.

    But that’s just me babbling, and I hope someone can help me figure out which branches to prune, if any. Think of this as me exposing my honestly held models to criticism and inviting people to correct my course. : )

    1. ^

      H/T Steven Byrnes for help clarifying this.
