There is no problem with “Munchkinism.” The problem is that in old RPGs the rules imply a poorly designed tactical battle simulation game (one that loses all challenge once the system is fully understood) with some elements of strategy, while the advertising implies a social-interaction and storytelling game without providing the rules necessary to support it. Thus different people think they’re playing different games together, and social interaction devolves into what people imagine they would do in a hypothetical situation without consequences (at least until the consequences are made explicit, violating their expectations, as you note in your example).
dugancm
Downvoted because, while I agree with the content of the message [1], I object to the way it was delivered, which seems to me to imply that an acceptable reaction to those who make the mistake is, “That was so stupid, I’m not even going to explain why you’re wrong. Just do what I say.” That they’re worth little enough to the community as to be acceptable targets of ridicule. If I had been publicly admonished in this way, I would feel alienated.
[1] Frivolous use of the word “rationality” and its conjugates in post titles needs to be curtailed and prevented.
Edited to clarify. (Thanks, wedrifid!) Original text follows for context, but please disregard.
Downvoted for status signalling at the expense of newcomers who can reasonably be expected to not have read A Human’s Guide to Words yet, without at least linking to an accessible explanation for those who might misinterpret the joke.
I found this person’s anecdotes and analogies helpful for thinking about self-optimization in more concrete terms than I had been previously.
A common mental model for performance is what I’ll call the “error model.” In the error model, a person’s performance of a musical piece (or performance on a test) is a perfect performance plus some random error. You can literally think of each note, or each answer, as x + c*epsilon_i, where x is the correct note/answer, and epsilon_i is a random variable, iid Gaussian or something. Better performers have a lower error rate c. Improvement is a matter of lowering your error rate. This, or something like it, is the model that underlies school grades and test scores. Your grade is based on the percent you get correct. Your performance is defined by a single continuous parameter, your accuracy.
But we could also consider the “bug model” of errors. A person taking a test or playing a piece of music is executing a program, a deterministic procedure. If your program has a bug, then you’ll get a whole class of problems wrong, consistently. Bugs, unlike error rates, can’t be quantified along a single axis as less or more severe. A bug gets everything that it affects wrong. And fixing bugs doesn’t improve your performance in a continuous fashion; you can fix a “little” bug and immediately go from getting everything wrong to everything right. You can’t really describe the accuracy of a buggy program by the percent of questions it gets right; if you ask it to do something different, it could suddenly go from 99% right to 0% right. You can only define its behavior by isolating what the bug does.
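To make the contrast concrete, here is a minimal sketch in Python. The arithmetic quiz and all the function names are my own invention for illustration; the point is only that under the bug model, errors cluster deterministically by class rather than falling at random:

```python
import random

# Error model: a performance is the true answer plus random noise.
# Accuracy degrades smoothly as the noise scale c grows.
def error_model_answer(x, c):
    return x + c * random.gauss(0, 1)

# Bug model: a deterministic procedure with one bug -- here, a student
# who computes multiplication by adding (the arithmetic example above).
def buggy_answer(op, a, b):
    if op == "+":
        return a + b
    if op == "*":
        return a + b  # the bug: every multiplication is affected
    raise ValueError(op)

def true_answer(op, a, b):
    return a + b if op == "+" else a * b

questions = [(op, a, b) for op in ("+", "*")
             for a in range(5) for b in range(5)]

# The buggy program gets every addition right and almost every
# multiplication wrong (only coincidences like 2*2 == 2+2 survive).
right = sum(buggy_answer(*q) == true_answer(*q) for q in questions)
print(right, "of", len(questions))
```

Asking this "student" only addition questions would report 100% accuracy; switching to multiplication drops it to near zero, which is exactly why a single percent-correct parameter can't describe a buggy program.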
Often, I think mistakes are more like bugs than errors. My clinkers weren’t random; they were in specific places, because I had sub-optimal fingerings in those places. A kid who gets arithmetic questions wrong usually isn’t getting them wrong at random; there’s something missing in their understanding, like not getting the difference between multiplication and addition. Working generically “harder” doesn’t fix bugs (though fixing bugs does require work).
Once you start to think of mistakes as deterministic rather than random, as caused by “bugs” (incorrect understanding or incorrect procedures) rather than random inaccuracy, a curious thing happens.
You stop thinking of people as “stupid.”
Tags like “stupid,” “bad at _”, “sloppy,” and so on, are ways of saying “You’re performing badly and I don’t know why.” Once you move it to “you’re performing badly because you have the wrong fingerings,” or “you’re performing badly because you don’t understand what a limit is,” it’s no longer a vague personal failing but a causal necessity. Anyone who never understood limits will flunk calculus. It’s not you, it’s the bug.
This also applies to “lazy.” Lazy just means “you’re not meeting your obligations and I don’t know why.” If it turns out that you’ve been missing appointments because you don’t keep a calendar, then you’re not intrinsically “lazy,” you were just executing the wrong procedure. And suddenly you stop wanting to call the person “lazy” when it makes more sense to say they need organizational tools.
“Lazy” and “stupid” and “bad at _” are terms about the map, not the territory. Once you understand what causes mistakes, those terms are far less informative than actually describing what’s happening.
Now that would be completely unacceptable indeed. Is, say, being on the business end of the mental health system in the worst way possible something like that? For myself, I don’t consider a life with something like that to be worth living.
So, the only reason you’re still alive is that you haven’t bothered (or been able) to verify whether you’ve forgotten thoughts you don’t remember having had? My sympathies.
Re-reading this post reminded me of Burning Wheel, a tabletop role-playing game whose reward system actively encourages questioning, testing, and acting upon the goals and beliefs of a fictional character created by the player, but simultaneously and subversively places the character in complex situations that force the player to change those beliefs over time as a result of the conflicts they cause (and somewhat according to chance). The player has to accept that his character may become something completely alien to how it started over the course of play, yet continue to empathize with it in order to be rewarded for acting out its actions in the fiction.
Would (re)designing such a game around further encouraging elements of rationality be too close to Dark Arts? (Luke Crane, the game’s creator, sometimes speaks about game design as a form of mind control at the gaming conventions he frequents.)
An example of “having the child occupied by some solitary activity” from my past: Almost as soon as I could walk, my parents started sending me on quests to find and retrieve various items throughout grocery stores, then put them back and find another if they weren’t quite what was asked for. It wasted almost none of their time while keeping me entertained and feeling (while learning to be) useful to them in that context.
Web app idea: I’m posting this comment immediately and without editing so I don’t forget the idea before I get a chance to write it down/work it out more, as I have to leave my computer soon.
Display a short passage that illustrates something irrational that people do or think, with instructions for the reader to enter into a text box the first or most important thing that came to mind and then press a “ready” button.
Ready button reveals a question of the form, “Were your thoughts similar to any of the following?” with a list of questions/remarks you would hope a rationalist would (or wouldn’t) ask/make.
Yes/No buttons save text box and button answers, clear the text box and question fields and replace the passage with a new one.
No priming by reading questions before passages. Writing their thoughts before seeing the question will hopefully keep people honest. Saving text box with answers allows answer auditing. Each passage’s irrationality may be more or less obvious depending on a person’s background. Same with desired/undesired thinking examples with questions (that’s what we’re measuring with this though, isn’t it?).
Positive example question: Yes = +1, No = 0
Negative example question: Yes = −1, No = 0
With an even split between positive and negative example questions, rationalists should score about half the number of questions asked. More questions answered = more confidence in the estimate. Wider range of topics addressed in the questions = more confidence in the estimate.
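The scoring rule above can be sketched in a few lines of Python. This is only a hypothetical implementation of the proposal, not code from any existing app; the names are my own:

```python
# Score a session under the proposed rule:
#   positive example question: Yes = +1, No = 0
#   negative example question: Yes = -1, No = 0
def score(responses):
    """responses: list of (is_positive_question, answered_yes) pairs."""
    return sum((1 if is_positive else -1)
               for is_positive, answered_yes in responses
               if answered_yes)

# With an even split, an ideal answerer says Yes to every positive
# question and No to every negative one, scoring half the questions asked.
ideal = [(True, True)] * 5 + [(False, False)] * 5
assert score(ideal) == len(ideal) // 2
```

One consequence of this rule worth noting: answering No to everything scores 0, so the scale distinguishes a cautious non-answerer from someone actively endorsing undesired thoughts (negative score) or desired ones (positive score).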
Edited to add: I created a storyboard for the app’s testing process here and have started a list of example passages with desired/undesired responses here.
I like how SIAI’s name references both the event you’re working toward and the method of achieving it. Is there a single word that describes a watershed event that would indicate the rationality institute’s direct success, like “Singularity” does an intelligence explosion? One that supporters could rally around and label themselves by (singularitarian)? A word for approximating the ideal Bayesian updater, for felling akrasia, for actually changing one’s mind? Can we create or annex one?
Exaltation, Transcendence, Apotheosis, Enlightenment, Upload, Elevation, Laudation, Upgrade, Epiphanic, and Ideate come to mind, but what I’m looking for is something more like “the act (event) of becoming your best self” in a word. Too many of these have strong religious connotations for me.
Meetup: Southwestern Ohio
When working in a textiles warehouse I would make it fun by imagining someone I’d met walking down a familiar street, showing off the shirt/hat/etc. I had just sorted/tagged/profiled in a ridiculous fashion-show montage, then turning to me with a smile and a wink or thumbs-up and saying, “Thanks, man!” or similar after I finished X items (depending on the day’s quota). The person would then step into a crowd behind me cheering me on, whom I would imagine turning around and hi-fiving one at a time after arbitrary milestones to celebrate my progress.
To come up with this idea I asked myself who would be disappointed if no one in the world were willing to do any job resembling mine anymore and what would they be losing, then optimized the generated examples for salience and awesomeness.
After some thought, I hereby create Max Agency! Plucky comic superhero mascot of Zenith Agency (Z.A. Huzzah!) …for Consequential Action (Z.A.C.A.) The acronym for which happens to be Max’s battlecry, but only when shouted in triplicate of course!
Now that I have a word, the idea of an agency without agents (only aspiring agents) tickles me tremendously.
Other thoughts: Agency Institute for Rationality Training (A.I.R. Training)
Agency Foundation for Applied Rationality (A.F.A.R.)
So where can I find anecdotes about how awesome and fun it is to be saving the world through FAI research and how rewarding it is to see your work have a direct impact, so I have something vicariously available to imagine when you ask me to donate my time?
...you should NOT paint your room and lose your deposit if you are not decently-off financially.
Unless the apartment owners and managers only care what it looks like when you leave and you can afford to add a few layers of white base paint just before doing so, to avoid losing the deposit. Such policies are often clearly delineated in the lease contract, and you can sometimes negotiate leniency with the management as long as you do so in writing and have it attached to the contract pre-signature. YMMV
I was not aware of this rumor. How did you come to the conclusion it is widespread, and why do you think it’s worth taking seriously?
Yep! My father and I will be going anyway.
The game aspect is trying to get a higher “score” of hi-fives at the end of each day. Sort of like Tetris or Bejeweled where you always run out of space/time eventually, but can play again to improve your score.
Meetup report! We had a total of 4 attendees plus a well-behaved infant. Much lower than usual, but not unexpected due to scheduling issues.
Meta-meetup discussion: Nominated a planner for the next meetup. It has been suggested that in the future, if an organizer/presenter cannot make it to their meetup, it not be postponed unless at least 3-7 days’ notice can be given (since not everyone checks their email, Facebook, and/or the Less Wrong posting daily).
Presentation: Skipped in favor of scouting the area as a location for future meetups. While I’m currently re-working the whole thing (found some critical flaws), it should be ready again by the meetup after next if there is interest.
Location Impressions: While the Wine Loft’s menu isn’t designed for eating a full meal, the indoor seating and ambiance are great for running group exercises or just being social. Very lounge-like, with couches, cushions, and easily moveable, low-to-the-ground tables. The outdoor seating is a little cramped and loud for my tastes, as it’s small and adjacent to the main thoroughfare off the expressway, but well shaded and cool this time of year. They’re also mostly empty on Sundays prior to 9 p.m., so we should be able to conquer a nook fairly easily even without a reservation; and that shouldn’t be a problem as long as our expected group size doesn’t fall below 6.
The Greene itself is pleasant to walk through, with wide sidewalks, lampposts, outdoor cafés, wall art, and non-repeating architecture. There is a small patch of greenery in the center which hosts events, some of them musical. There is a Books & Co. just across the alley from the Wine Loft; a spacious, two-story bookstore with a podium and seating for a presentation area, should we decide to run events for the public (such as educational material for CFAR) or start an ancillary Less Wrong book group. There is also a Funny Bone comedy club nearby that has shows every Sunday at 7 p.m., though I don’t know how good the performers are.
Food choices in the area tend toward the upscale, but Choe’s Asian Gourmet seems the most promising in terms of both price and the menu preferences of which I’ve been made aware. For future meetups I’d recommend having dinner there, then migrating to the Wine Loft for drinks, planning and rationality games and exercises.
Though we missed many of our regulars, a good time was had and much data gathered!
For those who’ve never used a command line interface and find them intimidating (one of my hurdles on the way to learning to program), I’d recommend Learn Code the Hard Way: The Command Line Crash Course. The exercises are designed to trip you up and force you to figure some things out for yourself, which has quickly increased my confidence and self-reliance so far.
I have not finished the book, but am already getting slightly addicted to “commanding” my computer to do my bidding instead of having to dig my way through Windows Explorer and context menus to get anything done. Am I right in thinking this may be good prep for migrating to Linux?
Is the temporary amusement of some at the sniping of those others’ status worth potentially alienating them from the community, even if they number less than “most”? I do not want such “ridicule of the less socially experienced and/or quick to read sequences” norms to become prevalent here.
Took the entire survey and all extra credit questions in one go; minus ACT, SAT2400 and Respectable(tm) IQ scores since I don’t have them, and the ≤140-character LW description because I was starting to get tired after the 40-minute IQ test.
So much fun! I’m very curious to see the results.