Everything is reducible—to Eliezer Yudkowsky.
Scientists only wear lab coats because Eliezer Yudkowsky has yet to be seen wearing a clown suit.
Algorithms want to know how Eliezer Yudkowsky feels from the inside.
The list doesn’t include anything in the way of game theory, social choice, or mechanism design, all of which will be crucial for an AI that interacts with other agents or tries to aggregate preferences.
Relevant book recommendations (most available as PDFs at the links):
Essentials of Game Theory by Leyton-Brown and Shoham. (Gated, might be accessible through a university connection, otherwise easily searchable)
A Course in Game Theory by Osborne and Rubinstein (requires free registration)
Multi-agent Systems: Algorithmic, Game-Theoretic, and Logical Foundations by Shoham and Leyton-Brown.
Algorithmic Game Theory edited by Nisan, Roughgarden, Tardos, and Vazirani.
I’m doing mechanism design for eliciting information without money. Most people here are aware of scoring rules and prediction markets, which reward participants according to the accuracy of their predictions. Drazen Prelec’s Bayesian truth serum (BTS) is an alternate mechanism that rewards predictions relative to the answers of others instead of the actual event. Since verification is done internally, the mechanism works for questions that would be difficult or impossible to evaluate on a prediction market, e.g. “Will super-human AI be built in the next 100 years?” or “Which of these ten novels was the most innovative and ground-breaking?”.
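For concreteness, here is a minimal sketch of Prelec’s BTS scoring rule (the function name and numpy implementation are mine; the formula is from Prelec’s 2004 Science paper). Each respondent earns an information score for giving a “surprisingly common” answer, i.e. one more common than the geometric mean of everyone’s predictions, plus a prediction score penalizing inaccurate predictions of the answer distribution:

```python
import numpy as np

def bts_scores(answers, predictions, alpha=1.0, eps=1e-9):
    """Bayesian truth serum (Prelec 2004) for a K-option question.

    answers:     (n,) int array; answers[i] in {0, ..., K-1}
    predictions: (n, K) array; row i is respondent i's predicted
                 distribution of answers over the K options
    alpha:       weight on the prediction score
    """
    n, K = predictions.shape
    predictions = np.clip(predictions, eps, 1.0)
    x_bar = np.bincount(answers, minlength=K) / n        # empirical answer frequencies
    log_y_bar = np.log(predictions).mean(axis=0)         # log geometric mean of predictions
    # Information score: log(x_bar_k / y_bar_k) for one's own answer k,
    # rewarding answers that are more common than collectively predicted.
    info = np.log(x_bar[answers]) - log_y_bar[answers]
    # Prediction score: -KL(x_bar || y_i), zero iff i predicts the
    # empirical answer distribution exactly, negative otherwise.
    pred = (x_bar * (np.log(predictions) - np.log(np.clip(x_bar, eps, 1.0)))).sum(axis=1)
    return info + alpha * pred
```

Truth-telling is a Nash equilibrium of this scoring scheme when respondents share a common prior, which ties into the caveat below.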
All three types of mechanisms assume the participants want to maximize their score from the mechanism. In many circumstances, though, people care much more about influencing the outcome of the mechanism than about their score or payment. Consider a committee making a high-stakes decision, like whether to fire an executive officer. Paying committee members based on their predictions would be gauche. Scores could be ignored if it meant getting a favored outcome, so without money BTS is easily manipulated. The usual fallback of majority vote is non-manipulable, but it can fail to uncover the correct answer if participants are biased. BTS outputs the right answer with enough participants, even with bias. To ensure truth-telling in Nash equilibrium, BTS does depend on participants having a common prior, although the mechanism operator doesn’t have to know what it is.
So far, I have mechanisms that encourage honesty without money, don’t depend on a common prior or specific belief formation processes, and capture ~80% of the potential gains over majority vote in simulations. The operation of the mechanism is fairly straightforward, although why it works is another question. I’m still trying to grasp what makes one mechanism estimate the state better than another, what the optimal mechanism is, or whether an optimal mechanism even exists given my constraints.
My primary focus is writing this up. At some point, I want to deploy a web app for polls on LW. I suspect this would be trivial for someone with actual development experience. I’m open to collaboration on the econ/stats or development side, so PM me if interested.
As an ex-Mormon, I had to personally confront this issue. My family, extended family, friends, neighbors, and the large majority of my hometown are Mormon, so the social costs of leaving my church were extremely high. While in high school, I was primarily in the closet, but I’d express the occasional doubt. Just the suggestion that the church could be tested against evidence resulted in people avoiding conversation with me, my now-wife being warned by mutual friends not to date me, and my parents sternly lecturing me. Note this was merely because I considered the possibility of contrary evidence, not a public expression of disbelief.
In the counterfactual world where I chose not to explore the veracity of religion, my high school years would have been significantly happier, I would have avoided prolonged conflict with my family, I would have served a two-year religious mission, and I would likely be attending BYU right now. In some ways, it does genuinely feel like this would have been better, but I can say with confidence that I made the right choice.
I could easily pick out reasons why someone shouldn’t remain Mormon specifically, but I want to engage the least convenient world for why we shouldn’t knowingly believe something false. Being a theist might not affect the quality of someone’s everyday life much, so there is no apparent gain from believing the truth. But similarly, beliefs about the moon landing, Santa, evolution, heliocentrism, etc. rarely influence someone’s everyday life. The problem is that once you allow exceptions to seeking evidence, letting your beliefs be influenced by evidence, and not starting with a bottom line, the exceptions start bleeding over into beliefs that do affect success. I don’t think this slippery slope is inevitable, but if you want to win, you can’t trust partitions*.
I absolutely agree that if Wednesday came to our community interested and enthusiastic, we should welcome her with open arms. Nevertheless, I would encourage her to break down any mental partitions she might have, or, failing that, simply to note that theism is not up for discussion in the context of this site.
* This is particularly true of Mormon culture where “I prayed about it, and felt the Spirit tell me it is right” can trump any other argument.
This is one of my favorite posts yet, but I’m not sure I understand your full chain of reasoning. I understand you to be arguing that we should only be affected by the denotational content of a statement, and ignore connotations as best we can. I entirely agree we shouldn’t confuse the two, but I don’t see how to go from that to your full conclusion. Is the danger of confusion so great it is worth giving up the extra expressiveness of connotation? I’d appreciate some clarification.
I really like the idea of an acronym, but I’d like one that can be used naturally as a verb. My best shot is “agree denotationally but object connotatively”, e.g. I adboc that the rich party while the poor starve.
I’m also somewhat confused by this. I love HPMoR and actively recommend it to friends, but to the extent Eliezer’s April Fools’ confession can be taken literally, characterizing it as “you-don’t-have-a-word genre” and coming from “an entirely different literary tradition” seems a stretch.
Some hypotheses:
Baseline expectations for Harry Potter fanfic are so low that when it turns out well, it seems far more stunning than it would relative to a broader reference class of fiction.
Didactic fiction is nothing new, but high quality didactic fiction is an incredibly impressive accomplishment.
The scientific content happens to align incredibly well with some readers’ interests, making it genre-breaking in the same way The Hunt for Red October was for technical details of submarines. If you are into that specific field, it feels world-shatteringly good. For puns about hydras and ordinals, HPMoR is the only game in town, but that’s ultimately a sparse audience.
There is a genuine gap in fiction that is both light-hearted and serious in places which Eliezer managed to fill. Pratchett is funny and can make great satirical points, but doesn’t have the same dramatic tension. Works that otherwise get the dramatic stakes right tend to steer clear of being light-hearted and inspirational. HPMoR is genre-breaking for roughly the same reasons Adventure Time gets the same accolades.
Ugh. The prize was first and foremost in recognition of Fama, Shiller, and Hansen’s empiricism in finance. In the sixties, Fama proposed a model of efficient markets, and it held up to testing. Later, Fama, Shiller, and Hansen all showed that further tests didn’t hold up. Their shared conclusion: the efficient market hypothesis is mostly right; while there is no short-term predictability based on publicly available information, there is some long-term predictability. Since the result is fairly messy, Fama and Shiller differ in what they emphasize (and are both over-rhetorical in their emphasis). Does “mostly right” mean false or basically true?
What’s causing the remaining lack of agreement, especially over bubbles? Lack of data. Shiller thinks bubbles exist, but are rare enough he can’t solidly establish them, while Fama is unconvinced. Fama and Shiller have done path-breaking scientific work, even if the story about asset price fluctuation isn’t 100% settled.
This sounds like a map/territory confusion. “Intelligence” is a concept in the map, used to summarize the common correlations in success across domains. There is no assumption that fully general cross-domain optimizers exist; it’s an empirical observation that most of the variance in performance across cognitive tasks happens along a single dimension. Contrast this with personality, where most of the variance is along five dimensions. We could talk about how each person reacts in each possible situation or “island”, but most of this information can be compressed into five numbers.
We could always drill down and talk about more factors, e.g. fluid vs crystallized intelligence or math vs verbal. More factors give us more predictive power, though when factors are chosen well, each additional one is less useful than the last.
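To make “most of the variance along a single dimension” concrete, here is a toy simulation (the loadings and noise level are invented for illustration, not real psychometric data): scores on eight tasks are generated from a single common factor, and the first principal component ends up carrying the bulk of the variance, much as g does in real test batteries:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tasks = 1000, 8
g = rng.normal(size=n_people)                    # hypothetical general factor
loadings = rng.uniform(0.6, 0.9, size=n_tasks)   # how much each task depends on g
scores = np.outer(g, loadings) + 0.5 * rng.normal(size=(n_people, n_tasks))

# Share of variance along each principal component of the centered scores
centered = scores - scores.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
var_explained = singular_values**2 / (singular_values**2).sum()
print(var_explained.round(2))   # first component dominates; the rest are small
```

Swap the single g for five independent factors and the variance spreads across five components instead, which is the personality analogy above.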
Though a single-factor model works well for humans, this isn’t necessarily the case for more general minds. I suspect the broad concept of intelligence carves reality at its joints fairly well, but assuming so would be a mistake.
This sequence still feels like it is privileging the hypothesis of the desirability of LDS organizational practices, but you make a good point. We lack condensed introductions. Eliezer handed out copies of the Twelve Virtues of Rationality in the past, but I don’t remember any other attempts at pamphlets or booklets. How much can the material be compressed with reasonable fidelity?
Some possible booklet ideas:
You Are A Brain: map/territory distinction, mind projection fallacy, reductionism, mysterious answers
Short guide to cognitive biases: examples of biases that can be directly illustrated and are less likely to be turned against others, like hindsight bias or the Wason selection task.
Checklists: e.g. Polya and Adam Savage on problem solving, Personal MBA questions to improve results
Sequence overviews: Scientific Self-help, 37 ways words can be wrong
Techniques/Heuristics: noticing confusion, holding off solutions, consider-the-opposite, tracking/data collection, being specific, and perhaps touching on deliberative vs automatic cognition or Haidt’s elephant and rider metaphor
You are already living with the truth: Litany of Gendlin (plausibly the most important thing beginning rationalists can hear), On being ok with the truth
Identity and rationality: Keeping your identity small, cached selves, entropic nature of organizations, social vs individual rationality
One more hypothesis after reading other comments:
HPMoR is a new genre where every major character either has no character flaws or is capable of rapid growth. In other words, the diametric opposite of Hamlet, Anna Karenina, or The Corrections. Rather than “rationalist fiction”, a better term would be “paragon fiction”. Characters have rich and conflicting motives, so life isn’t a walk in the park despite their strengths. Still, everyone acts completely unrealistically relative to life-as-we-know-it by never doing anything dumb or against their interests. Virtues aren’t merely labels and obstacles don’t automatically dissolve, so readers could learn to emulate these paragons through observation.
This actually does seem at odds with the western canon, and off-hand I can’t think of anything else that might be described this way. Perhaps something like Hikaru no Go? Though I haven’t read them, maybe Walter Jon Williams’ Aristoi or Iain Banks’ Culture series?
P-zombies gain qualia after being in the presence of Eliezer Yudkowsky.
Only slightly facetiously, why aren’t you studying to be an archeologist or geneticist then? If in your judgment there is a substantial gap in scientific knowledge and it isn’t being filled for whatever reasons, why aren’t you pursuing it?
I don’t think the animal or plant life claims are that important. Maybe they were evidence against the Book of Mormon before, but given newer discoveries, mentioning them is now neutral. It’s not like Smith was consciously defying an establishment when he said there was barley in the Americas. I’m also willing to accept that God or Smith might have taken license in translating these terms. The question of whether or not the Nephites had horses pales in comparison to the implication that modern genetics is wrong.
The basic claim of the Book of Mormon is that Jews settled in the Americas, established a fairly large civilization, and most Amerinds are partially descended from them. It’s not like these are disputed, minority positions in academia; they aren’t even on the radar.
Hmm, wow. This poses all sorts of interesting questions. How should this make me update on
Beck’s overall rationality?
Beck’s overall sincerity and what his agenda actually is?
Whether the Singularity/existential risk is being taken more seriously by a mainstream audience?
Whether this is good or bad press for SI?
What are your goals? What are your constraints? This is off-topic, but without those, we can’t give much advice anyways.
EDIT: Here is some generic advice.
Buy a copy of the Princeton Companion to Mathematics. It will serve you well. In particular, it will give you a good global understanding of math and how topics inter-relate. If you don’t know what you are interested in, this book is a good place to learn.
If you want a technical understanding, buy a couple of Dover books on vital topics like algebra or analysis. Work through them a page or two at a time, checking that you remember definitions and theorems after you read. Do the problems. Not every single one, but enough that the material actually sinks in. Fraleigh’s A First Course in Abstract Algebra happens to have exemplary exercises.
When taking classes, the professor matters more for how interesting it is than the subject. Unless you need that particular subject, find out who is engaging and motivates their material well.
As a convert, you apparently experienced a major shift in belief, especially since you committed to a mission soon after. What in particular changed your mind?
What is your perspective on the role of faith in belief? How much of your belief would you say is due to feelings attributed to the Holy Ghost vs weighing other evidence?
What would be evidence to substantially revise your belief in the church downwards?
What have you thought of your reception here? Have you been surprised by any reactions?
What are you studying at Stanford?
I’m particularly interested in what you have to say as a convert. I know how the process works in the other direction (leaving the church at 17), but it’s important to know why people change their minds in general.
ETA: After looking at your blog, I’ll be frank and admit I was hoping for something a little more sophisticated to engage with. Your conversion appears to be based on a feeling of rightness, without really grappling with why it might or might not be true. Since learning the technical meaning of evidence, I no longer dismiss “feeling the Spirit” completely. Spiritual experiences are more likely if religion is true than if it is not, but not significantly more so. Hence it’s very weak evidence, nowhere near enough to overcome even the basic evidence against.
LDS theology does have a veneer of rationality, saying “the glory of God is intelligence”, encouraging learning, and claiming there are universal laws that God works within, but the substance isn’t there. In your post on reading Dialogue, you acknowledge there are issues, but seem satisfied with acknowledgement rather than tracing out their implications.
I don’t want to hold you to blog posts that are a couple of years old, though. I’m still eager to learn any insights you might have. Please stick around. However (speaking to everyone else here), I’m remembering why direct discussion of religion isn’t productive, even as a case study in how thinking can go awry. The mistakes you are making are too basic to be relevant to most people here. Thanks for opening yourself up to questions, but people (including myself) have been too eager in asking.
Also, economics and math! Always nice to meet another member of the tribe.
I’d go with: Probability exists in your mind, not the world, but there still is an “objective” way to calculate it.
We’re descended from the indignant, passionate tellers of half truths who in order to convince others, simultaneously convinced themselves. Over generations success had winnowed us out, and with success came our defect, carved deep in the genes like ruts in a cart track—when it didn’t suit us we couldn’t agree on what was in front of us. Believing is seeing. That’s why there are divorces, border disputes and wars, and why this statue of the Virgin Mary weeps blood and that one of Ganesh drinks milk. And that was why metaphysics and science were such courageous enterprises, such startling inventions, bigger than the wheel, bigger than agriculture, human artifacts set right against the grain of human nature.
-- Ian McEwan, Enduring Love (1998, p. 181)
Reason means truth, and those who are not governed by it take the chance that someday a sunken fact will rip the bottom out of their boat.
-- Oliver Wendell Holmes, Jr.
The greatest enemy of truth is very often not the lie—deliberate, contrived, and dishonest—but the myth—persistent, pervasive, and unrealistic.
-- John F. Kennedy
(For those interested, I’m pulling most of these quotes from Rational Choice in an Uncertain World by Robyn Dawes, which I just began)
Teachers try to guess Eliezer Yudkowsky’s password.