Thanks!
malthrin
Are you posting about this here looking for input/ideas, or simply as a case study of what Eliezer described?
What kind of answers are being given to “is this the community we meant to create”?
Primarily as a case study, though input is certainly welcome.
There’s division among the moderation staff about how the site should develop. Some feel that we should work on being more approachable to people who want to learn what to do without learning why: concise and easily-found guides, user-friendly models, etc. Others prefer the status quo and would rather improve information sharing across different models to reduce wasted effort in mechanics testing. The first group hopes to encourage discussion of subjective topics by attracting new posters, while the second is fine with finding those discussions on other sites.
I fear I’ve fallen into the historian’s trap of implying intentionality in the course of presenting a selection of events as a narrative. Your underlying assertion is that we did a poor job planning our application architecture in advance of the grand project of modeling WoW; the reality is that we didn’t know we had undertaken such a project until we were in the middle of it, until the community consensus had emerged that Elitist Jerks is where the theorycrafting happens.
A good comparison is open-source software. There’s no editorial control preventing someone from developing a piece of software for their own use, written in whatever language and idioms suit them best. If the author then chooses to share this tool with the community, do we turn it away because it didn’t follow the specifications for an existing modeling platform? There are at least 3, in C++, C#, and Python. Perhaps if the EJ administration had thrown its weight behind one of them, we’d have the standard platform you advocate—or perhaps we would have splintered our community.
Going back to the meta level, NancyLebovitz touched on one point that I was hoping to make in http://lesswrong.com/r/discussion/lw/5gg/entropy_and_social_groups/ - trading one kind of community equilibrium for a different kind, with its own advantages and disadvantages, through consistent application of rules. The more general point is the difficulty of predicting any specific outcome when it comes to group action.
Yes, I am prepared to negotiate on compensation. I attempted to negotiate a larger raise at my current position during my last review and convinced my superior, but he was overruled by HR. While I failed in that instance, I’m now more confident about arguing my side of salary negotiations.
My situation is somewhat complicated by being represented by recruiters for all of these opportunities. My current thought is that getting two or more offers through the same recruiter would be ideal; he’ll be incentivized to argue my case on compensation much more than if there were only a single offer that he needed me to accept.
Recruiters spend a significant amount of time combing LinkedIn and various job sites for good candidates. They’re already looking for you; the trick is to convince them that the effort:reward ratio of placing you is good. So, things to do:
0) Have a decent resume. It should be easy for the recruiter to see a) your skills and b) what sets you apart from the crowd.
1) Post your resume on industry-specific sites and update your skills on LinkedIn. Make sure you can be contacted via these sites.
2) Respond promptly to initial communications. This is big; if you’re difficult to contact, that effort term goes way up.
3) Represent yourself well on the phone call. The content is important, but so is the delivery; the recruiter is gauging how well you’ll perform during interviews.
If you make it that far, you will probably be invited to meet your recruiter face-to-face. This is mostly a formality to make sure that you are punctual and that you don’t smell bad.
I started job hunting in earnest two weeks ago. I’ve spoken with 5 recruiters on the phone, met with 3 in person, and turned another 6 or 7 away because I felt like I should give the first set time to work.
Software.
Could have been either or both.
Taboo ‘AI’ in your question. Are you looking for:
A) A self-modifying structure that contains an internal representation of the surrounding gameboard and a planning algorithm that uses that representation to achieve goals according to some utility function?
B) A structure that can pass the Turing test, with questions and answers encoded in the game space?
C) Something else?
I suspect you meant A. Rephrasing your question gives you an idea what subquestions to look into, such as the degree to which a contained internal representation is possible given the rules of the game and the size of the board.
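To pin down what option A minimally means, here’s a toy sketch of an agent with an internal board representation, a utility function, and a (greedy, one-step) planning algorithm. All names here (GridAgent, utility, plan_step) are illustrative inventions, it isn’t self-modifying, and real planning would search deeper than one step:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    x: int
    y: int

class GridAgent:
    """Toy agent: internal board model + utility function + greedy planner."""

    def __init__(self, goal, width=5, height=5):
        self.goal = goal      # internal representation of the objective
        self.width = width    # internal representation of the board bounds
        self.height = height

    def utility(self, s):
        # Higher utility = closer to the goal (negative Manhattan distance).
        return -(abs(s.x - self.goal.x) + abs(s.y - self.goal.y))

    def actions(self, s):
        # Legal one-step moves that stay on the board.
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nx, ny = s.x + dx, s.y + dy
            if 0 <= nx < self.width and 0 <= ny < self.height:
                yield State(nx, ny)

    def plan_step(self, s):
        # One planning step: pick the successor state with the best utility.
        return max(self.actions(s), key=self.utility)

agent = GridAgent(goal=State(4, 4))
s = State(0, 0)
for _ in range(8):
    s = agent.plan_step(s)
print(s)  # reaches the goal corner after 8 steps
```

The interesting subquestion from the comment above maps directly onto this sketch: how much of that internal model (the `goal`, `width`, `height` fields) can actually be encoded within the rules and board size of the game itself?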
Sense of unproductivity is a good flag for unbundling goals. I recently tried to figure out why I haven’t finished as many free-time programming projects as I used to, and realized that I had at least 4 goals for free-time programming: learn a new language, build something personally useful, build something other people will use, and apply techniques from a textbook I’ve been working through. I couldn’t find a project that satisfied all my goals, so I was skipping back and forth and not finishing anything.
I think someone read your article: http://www.theatlantic.com/magazine/print/2011/07/the-brain-on-trial/8520/
He comes at it from a slightly different angle—the criminal justice system—but approaches it the same way, dissolving the question down to blameworthiness and free will. He also reaches the same conclusion: our reaction as a society should be based on influencing future outcomes, not punishing past actions.
Rationality is a method for answering questions, not an answer itself. If you don’t have any pressing questions—in other words, you’re happy and content—you may not see much use for it yet.
When I first finished reading the sequences, I thought, “Great! Now I’ll go through my beliefs and fix all the stupid ones! Okay, what do I believe that’s wrong?” My reply: “...” Obviously, it’s not that simple—if I knew it was wrong, I wouldn’t have believed it in the first place. I could have tried to reevaluate everything I believe from the ground up, but that sounded like a poor effort:reward task. I suspect you feel the same way.
So what am I getting out of Bayesian rationality, the study of biases, and the Less Wrong community?
A better understanding of my own motivations. For example: My job hunt post, Motivated Stopping.
A collection of effective life-hacks and a community dedicated to finding and sharing more. Examples: Learn from Textbooks, rejection therapy, Defeating Ugh fields.
A strategy for attacking questions that I really don’t know the answer to. Examples: What can my parents do to take care of their surviving elders without totally sacrificing their financial and mental health? What can I do to help my autistic, college drop-out younger brother? What should my wife and I do about her house in Florida that’s been on the market for nearly a year?
In addition to all that, I’m updating my beliefs in place. When I learn something that surprises me, I take a closer look at why I believe what I believe, looking for an unfounded assumption that led to the current error. That’s what I suggest for you: don’t expect what you’ve learned here to rewrite your entire worldview, but keep it handy for the next time life asks a Hard Question or throws you an utterly unanticipated datum.
But if we’re already committed to the reductionist understanding of free will in the first place, what does this intuition that Charles and Alex are somehow “less free” really mean?
Glib answer: it means your intuition is faulty.
More serious answer: make a testable prediction. What does it look like when someone is “less free”, given the reinterpretation of “free will” as “a planning algorithm based on a ‘normal’ preference ranking of outcomes”? We may just be hiding the question inside the word ‘normal’ there, but let’s run with it.
Here’s an example prediction: someone who’s “less free” is not susceptible to persuasion. In a standard H. sapiens, strong social pressure can dramatically reorganize a preference ranking. However, I wouldn’t expect persuasion to have much effect in these tumor cases.
My prediction has some obvious holes in it. For example, cryonics advocates defy majority opinion because they’re convinced that they’re correct and the issue is that important. What I’m trying to convey is the technique—if you think a category boundary exists, but you’re not sure precisely where to draw it, put your finger on the page and try to feel the contours of the problem.
One memetic virulence strategy operates by making outlandish promises that subscribing to it will make you smarter, richer, more successful, more attractive to the opposite sex, and just plain superior to other people—and then doing it in a way that can’t obviously be proven wrong.
That similarity is the key to both the perceived creepiness factor and the signal:noise ratio on this site. Groups formed to provide a service have performance standards that their members must achieve and maintain: drama clubs and sports teams have tryouts, jobs have interviews, schools have GPA requirements, etc. By contrast, groups serving as vehicles for contagious memes avoid standards. Every believer, even if personally useless to the stated aims of the group, is a potential transmission vector.
I see two reasons to care which of those classes of groups LW more closely resembles: first, to be aware of how we’re coming across to others; and second, as a measure of whether anything is actually being accomplished here.
Personally, I try to avoid packaging LW’s community and content into an indivisible bundle. From Resist the Happy Death Spiral:
To summarize, you do avoid a Happy Death Spiral by (1) splitting the Great Idea into parts (2) treating every additional detail as burdensome (3) thinking about the specifics of the causal chain instead of the good or bad feelings (4) not rehearsing evidence (5) not adding happiness from claims that “you can’t prove are wrong”; but not by (6) refusing to admire anything too much (7) conducting a biased search for negative points until you feel unhappy again (8) forcibly shoving an idea into a safe box.
There are a great many insightful posts on LW, mostly from Eliezer, Yvain, and a few others. There are other posts that are less specific and of correspondingly smaller insight. There is also a community centered in the discussion section that spends most of its time espousing the beliefs in the main post. Rather than allowing all these ideas to prop each other up, I’m content to wield the supported and useful techniques and discard the rest.
It seems you’re not the only one who recently read up on prizes. Google just launched Prizes.
I’ve been told that the main impediment to speed-reading is subvocalization—pronouncing the words in your head as you read them. If you can stop subvocalizing, you’re capable of extracting concepts from text much faster.
Learning to speed-read might be a useful step towards your goal. Apply the usual second-hand disclaimers: I haven’t tried this myself.
Group status, 1-on-1 status, mood—all that mammalian stuff. Most of that comes through side channels: posture, tone, eye movements. As for equivalent side-channels in online communications, compare:
u can say alot of things wihout words rite?? :)
I’m sure you’ve already thought of this, but have you considered the information content of syntax as well as semantics? Sorry to bother you.
Augh, RTFM you n00b, don’t waste our time with this BS. Punctuation, tone, and w0rd ch01c3 are covered on the wiki for online chat.
You can put a chimp in front of a keyboard, but it’s still a chimp.
What I find fascinating are the different levels of subtext-literacy I encounter online. Everyone is hard-wired to internalize their culture’s body language model at a young age, but exposure to the internet equivalent varies widely. I imagine that being internet-communication-illiterate is something like visiting a foreign country—you can understand and make yourself understood, but you stick out like a sore thumb and are blind to every kind of subtlety.
Is there something wrong with the Mersenne Twister?
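As a point in its favor: the Mersenne Twister (MT19937) has been the generator behind CPython’s standard `random` module since Python 2.3, so its determinism and statistical quality are easy to inspect on any stock install:

```python
import random

# Two generators seeded identically produce identical streams --
# the Mersenne Twister is fully deterministic given its seed,
# with a period of 2**19937 - 1.
a = random.Random(12345)
b = random.Random(12345)

stream_a = [a.random() for _ in range(5)]
stream_b = [b.random() for _ in range(5)]
print(stream_a == stream_b)  # True
```

The usual caveat is that it’s not cryptographically secure, which matters for security applications but not for game modeling or simulation.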
What do you want to do with your new ‘rationality’? Choose a problem and then the tools you’ll need to solve it. Don’t be a tool looking for a problem.
I’ve been working through Russell and Norvig’s AI textbook—reading and implementing key algorithms in F#—but I’ve recently gotten derailed due to a multiweek vacation. I’ll get back into it this month.
I’d be interested. I live in Raleigh and work standard hours. Could do weekend or possibly weeknight, depending on the timing. If a weekend, Easter weekend is out—I’m getting married.