• TurnTrout 27 Jun 2022 18:34 UTC
    77 points

    Rationality exercise: Take a set of Wikipedia articles on topics which trainees are somewhat familiar with, and then randomly select a small number of claims to negate (negating the immediate context as well, so that you can’t just syntactically discover which claims were negated).

    For example:

    By the time they are born, infants can recognize and have a preference for their mother’s voice, suggesting some prenatal development of auditory perception.

    -> modified to

    Contrary to early theories, newborn infants are not particularly adept at picking out their mother’s voice from other voices. This suggests the absence of prenatal development of auditory perception.

    Sometimes, trainees will be given a totally unmodified article. For brevity, the articles can be trimmed of irrelevant sections.
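    The exercise procedure above can be sketched in code. This is a minimal illustration, not anything from the post: `make_trial` and `score` are hypothetical names, and the 80% modification rate is an assumed parameter (the post only says trainees are *sometimes* given an unmodified article).

    ```python
    import random

    def make_trial(original, negated, p_modified=0.8, rng=random):
        """Return (text_to_show, was_modified).

        Sometimes show the unmodified article, so trainees can't
        assume every article contains a negation.  The 0.8 rate is
        an assumption for illustration.
        """
        if rng.random() < p_modified:
            return negated, True
        return original, False

    def score(answers):
        """answers: list of (trainee_said_modified, was_modified) pairs.
        Returns the fraction of trials the trainee judged correctly."""
        return sum(guess == truth for guess, truth in answers) / len(answers)
    ```

    A grading harness would collect each trainee's verdicts across a batch of trials and report `score`; calibration questions ("how confident were you?") could be layered on top.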

    Benefits:

    • Addressing key rationality skills. Noticing confusion; being more confused by fiction than fact; actually checking claims against your models of the world.

      • If you fail, either the article wasn’t negated skillfully (“5 people died in 2021” → “4 people died in 2021” is not the right kind of modification), you don’t have good models of the domain, or you didn’t pay enough attention to your confusion.

      • Either of the last two is good to learn about.

    • Scalable across participants. Many people can learn from each modified article.

    • Scalable across time. Once a modified article has been produced, it can be used repeatedly.

    • Crowdsourcable. You can put out a bounty for good negated articles, run them in a few control groups, and then pay based on some function of how good the article was. Unlike original alignment research or CFAR technique mentoring, article negation requires skills more likely to be present outside of Rationalist circles.
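      The bounty idea above could pay out by some function of control-group results. A hypothetical rule (this function and its parameters are illustrative assumptions, not from the post) might pay a base amount for a usable article plus a bonus for each control reader the negation fooled:

      ```python
      def payout(control_results, base=20.0, bonus_per_miss=5.0):
          """Hypothetical bounty rule: `control_results` is a list of
          booleans, one per control-group reader, True if that reader
          caught the negation.  Pay `base` for a usable negated article
          plus `bonus_per_miss` for each reader it fooled."""
          misses = sum(1 for caught in control_results if not caught)
          return base + bonus_per_miss * misses
      ```

      In practice one would also want to reject articles that fool *everyone* (they may be unfalsifiable rather than skillfully negated), but that policy choice is beyond this sketch.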

    I think the key challenge is that the writer must be able to match the style, jargon, and flow of the selected articles.

    • Looking back on my alignment PhD by TurnTrout (1 Jul 2022 3:19 UTC; 332 points)
    • Brief notes on the Wikipedia game by Olli Järviniemi (14 Jul 2024 2:28 UTC; 68 points)
    • Bayesian updating in real life is mostly about understanding your hypotheses by Max H (1 Jan 2024 0:10 UTC; 63 points)
    • Morpheus 28 Jun 2022 19:30 UTC
      5 points

      I remember that a magazine I read as a kid (Geolino) had a section like this (something like 7 news stories from around the world, one of which is wrong). It’s German-only, though I’d guess similar features exist in English media?

    • TurnTrout 6 Oct 2022 0:14 UTC
      3 points

      Additional exercise: Condition on something ridiculous (like apes having been continuously alive for the past billion years), in addition to your own observations (your life as you’ve lived it). What must now be true about the world? What parts of your understanding of reality are now suspect?

    • Yitz 28 Jun 2022 0:25 UTC
      3 points

      This is a lot like Gwern’s idea for a fake science journal club, right? This sounds a lot easier to do though, and might seriously be worth trying to implement.