I’m Screwtape, also known as Skyler. I’m an aspiring rationalist originally introduced to the community through HPMoR, and I stayed around because the writers here kept improving how I thought. I’m fond of the Rationality As A Martial Art metaphor, new mental tools to make my life better, and meeting people who are strange in ways I find familiar and comfortable. If you’re ever in the Boston area, feel free to say hi.
Since early 2023, I've been the ACX Meetups Czar. You might also know me from the New York City Rationalist Megameetup, editing the Animorphs: The Reckoning podfic, or being that guy at meetups with a bright bandanna who gets really excited when people bring up indie tabletop roleplaying games.
I recognize that last description might fit more than one person.
This is not quite true, though I do think treating their words as containing ~zero information is the correct first-pass approach. There does exist a trickier response where their words give you some evidence about what they want you to believe, what they want others to have heard them say, or what the kind of person they imagine themselves to be would say. Zvi’s Simulacra Levels may be useful here.
Parsing people’s statements this way leaves me open to going off the rails and doing galaxy-brained apophenia takes. I do think I have sometimes derived useful information from “They’re saying X. I don’t know whether X is true or false. I do now know they want to have said X. What would they gain from having done that?” I try to be pretty precise about what I think I know and why I think I know it when trying moves like this.
(For a blatant and fictional example, imagine that you are one of a tyrannical regime’s enforcers, and Bella, who you happen to know lies a lot, comes to you and says her boss is secretly an enemy sympathizer. Without any idea whether that’s true or false, you now have a good guess that Bella would like bad things to happen to her boss. Her words had no relation to the object-level truth, but they still carried information for you.)