Projects-in-Progress Thread
From Raemon’s Project Hufflepuff thread, I thought it might be helpful for there to be a periodic check-in thread where people can post about their projects in the spirit of cooperation. This is the first one. If it goes well, maybe we can make it a monthly thing.
If you’re looking for a quick proofread, trial users, a reference to a person in a specific field, or something else related to a project-in-progress, this is the place to put it. Otherwise, if you think you’re working on something cool the community might like to hear about, I guess it goes here too.
https://docs.google.com/spreadsheets/d/1Xh5DuV3XNqLQ4Vv8ceIc7IDmK9Hvb46-ZMoifaFwgoY/edit#gid=0
https://wiki.lesswrong.com/wiki/Mi_Casa_Lesswrong
Generating a list of houses around the world that are willing to host visiting rationalists. Feel free to add yourself.
The homepage was recently edited. I don’t like the edit and would like to rewrite it, but I also don’t want to get into an editing war with other people. So if you would like to collaborate on the new front page and suggest what you might want to see on it, the document is here:
https://docs.google.com/document/d/1DRhgrnWT31AfF5JyoHTAo7Q3HJqPZ72rmO8l018qTa4/edit
The previous front page can be found here:
https://wiki.lesswrong.com/index.php?title=Lesswrong%3AHomepage&diff=15726&oldid=15014
I am thinking we might want to emphasise (in no particular order):
local meetups
discussion board + ongoing activity
global map of users (zeemaps)
lesswrong slack
lesswrong IRC
a brief history of lesswrong
the sequences and HPMOR
our friends and their blogs (mindingourway, agentyduck)
offshoot businesses, organisations, and groups (EA, CFAR, MIRI, Beeminder, Complice, Mealsquares)
some other memetic ideas we follow (cryonics, transhumanism, AI, programming, rational fiction)
some kind of list of rat-houses around the world.
the latest welcome thread
Thanks for putting this together!
I’m unsure how much info we want to put on the LW home page (I’m leaning towards less stuff is better). Are there good repositories / intro pages where we could put the rest of the info?
Also, made a few edits / comments for readability and flow on the doc.
Agree with most of your edits.
I think it’s possible to exercise Hufflepuff virtue in the act of encouraging more Ravenclaw virtue, right? That is, getting an arbitrary ball rolling is a Hufflepuff thing to do, even if you roll the ball in a Ravenclaw direction? That’s an important distinction to me.
A mid-term goal of mine is to replicate Dougherty et al.’s MINERVA-DM in MIT/GNU Scheme (it was originally written in Pascal; no, I haven’t requested the authors’ source code, and I don’t intend to). I also intend to test at least one of its untested predictions using Amazon Mechanical Turk, barring any future knowledge that makes me think I won’t be able to obtain reliable results (which has only become less plausible as I’ve learned more; e.g. Turkers are more representative of the U.S. population than the undergraduate population that researchers routinely sample from in behavioral experiments, and there are also a few enthusiasts who have done some work on AMT-specific methodological considerations).
MINERVA-DM is a formal model of human likelihood judgments that successfully predicts the experimental findings on conservatism, the availability heuristic, the representativeness heuristic, the base rate fallacy, the conjunction fallacy, the illusory truth effect, the simulation heuristic, and the hindsight bias. MINERVA-DM can also be described as a modified version of Bayes’ Theorem. I’m not too far along yet, having just started learning Scheme/programming-in-general, but I have managed to cobble together a one-line program that outputs an n-vector with elements drawn randomly with replacement from the set {-1, 0, 1}, so I guess I’ve technically started writing the program.
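For concreteness, that building block might look something like the following minimal sketch in MIT/GNU Scheme (the procedure name is my own invention, not the actual one-liner): it returns an n-element vector whose entries are drawn uniformly at random, with replacement, from {-1, 0, 1}.

    ;; Minimal sketch: an n-element vector with entries drawn uniformly
    ;; at random, with replacement, from the set {-1, 0, 1}.
    ;; (random 3) returns 0, 1, or 2; subtracting 1 shifts this to -1, 0, 1.
    (define (random-trace n)
      (let ((v (make-vector n)))
        (do ((i 0 (+ i 1)))
            ((= i n) v)
          (vector-set! v i (- (random 3) 1)))))

    ;; Example: (random-trace 5) might return #(-1 0 1 1 0)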
It’s worth saying that I’m not very confident that MINERVA-DM won’t eventually be overturned by a better model, but that’s not the point.
I need some sort of example, and MINERVA-DM has good properties as an example, because its math is exceedingly simple (i.e., capital-sigma notation, arithmetic mean, basic probability theory (see Bolstad’s Introduction to Bayesian Statistics, Ch. 3), etc.). There are probably plenty of improvements that we need to and could make as a community, but my own concern is that it’s never been winter-night-clear to me why at least some of us aren’t trying to perform (Keyword Alert!) heuristics and biases/judgment and decision making (JDM)/behavioral decision theory research on LW or on whatever conversational focus we may be using in the near- to mid-term future. There is no organization in the community for this; CFAR is the closest thing, and AFAICT they are not doing basic research into H&B/JDM/BDT. People around here seem to me more likely than most to agree that you’re more likely to make progress on applications if you have a deep understanding of the problem that you’re trying to solve.
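To give a flavor of just how simple that math is, here is a rough Scheme sketch of the core computation as I understand it from Hintzman’s MINERVA 2, which MINERVA-DM builds on (treat the details as approximate rather than as the published model): a probe’s similarity to each stored memory trace is a normalized dot product, activation is the cube of similarity, and a likelihood judgment is read off the arithmetic mean of the activations.

    ;; Rough sketch of the MINERVA-style core computation (approximate;
    ;; see Hintzman's MINERVA 2 and Dougherty et al.'s MINERVA-DM for
    ;; the published definitions).

    ;; Similarity: the sum of pairwise products of probe and trace
    ;; entries, divided by the number of positions where at least one
    ;; of the two vectors is nonzero.
    (define (similarity probe trace)
      (let ((n (vector-length probe)))
        (let loop ((j 0) (dot 0) (relevant 0))
          (if (= j n)
              (if (zero? relevant) 0 (/ dot relevant))
              (let ((p (vector-ref probe j))
                    (t (vector-ref trace j)))
                (loop (+ j 1)
                      (+ dot (* p t))
                      (if (and (zero? p) (zero? t))
                          relevant
                          (+ relevant 1))))))))

    ;; Activation: the cube of similarity (preserves sign and
    ;; downweights weak matches).
    (define (activation probe trace)
      (expt (similarity probe trace) 3))

    ;; Echo intensity: the arithmetic mean of the activations over the
    ;; stored traces, from which likelihood judgments are derived.
    (define (echo-intensity probe traces)   ; traces: a list of vectors
      (/ (apply + (map (lambda (tr) (activation probe tr)) traces))
         (length traces)))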
I think most people find it intuitive that you simply cannot productively do academic work solely in the blogosphere, and when you’re explaining a counterintuitive point, a point that is not universal background knowledge, you should recurse far enough to prevent misunderstandings in advance. So: I no longer find it intuitive that you can’t do a substantial amount of academic work in the blogosphere. For one, a good deal of academic work, especially the kind we’re collectively interested in, doesn’t require any special resources. Reviews, syntheses, analyses, critiques, and computational studies can all be done from a keyboard. As for experiments, we don’t need to buy a particle accelerator for psych research, you guys; this is where Mechanical Turk comes in. E.g. see these two blog posts wherein a scientist replicates one of Tversky and Kahneman’s base rate fallacy results with N = 66 for US$3.30, and replicates one of Tversky and Kahneman’s conjunction fallacy results with N = 50 for US$2.50. (Here’s a list with more examples.)
Arguing that there’s important academic work that doesn’t require anything but a computer (reviews, syntheses, analyses, computational studies), and demonstrating that you can test experimental predictions with your lunch money, seems like a good start on preempting the “you can’t do real science outside of academia” criticism. (It’s not like there isn’t a precedent for this sort of thing around here anyway.) It also prevents people from calling you a hypocrite for proposing that the community steer in a certain direction without your doing any of the pedaling. I probably would’ve kept quiet for a lot longer if I didn’t think it was important to the community to respond to calls like this article, especially considering that we may be moving to a new platform soon.
I am writing an article about fighting aging as a cause for effective altruism—early draft, suggestions welcome.
And also an article and a map about “global vs local solutions of the AI safety problem”—early draft, suggestions welcome.
Please PM me a draft of your fighting aging article if you want to—I can read it and offer feedback.
Thanks, I will do it after I finish incorporating a substantial contribution I got from another source.
I recently launched a new service called Hermes. It connects users with dating experts for live texting advice. It runs on a unique platform designed to greatly simplify sharing and discussing text conversations. Since modern dating is changing so rapidly, especially with the rise of online dating apps and a growing population of young people glued to their phones, helping people improve their texting can greatly improve their dating life. I’ve been a software developer and dating coach for over 10 years, so this is sort of my passion project.
I’d be happy to get some trial users. General feedback is greatly appreciated too.
Just tried out the Hermes trial! I found the coaches aren’t too responsive (~1 hr delay between my first message and their response). I’ll see how much they can help, and I’ll give feedback later on the actual advice.
The layout is pretty cool, though!
Thanks for trying it out. Hermes is still a work in progress and one of our top priorities now is improving responsiveness.
Looking forward to helping you out!
Thank you for trying to improve the quality of the debate! If you could rewrite the most important insights as a new “sequence”, that would be awesome.
If I may express my opinion, I would prefer reading a text that would not include criticism of what to me seems like a strawman of “rationalists”, and simply focus on the specific ideas. (Something like writing “2+2=4” instead of “rationalists believe that 2+2=3, but post-rationalists believe that 2+2=4 and here is why”.) I am curious how much of post-rationality will remain after the tribal aspects are removed.
Tone arguments are often frowned upon, but there is a difference between saying “you guys are making a specific mistake here, let me explain, because this is very important” and “you guys are hopelessly wrong, I am going away and starting my own dojo”—even if technically both of them mean “you are wrong, and I am right”.
It would be especially bad if the guy starting his own new dojo happens to be right about a specific thing X and also to be wrong about a specific thing Y. Now believing in “neither X nor Y” becomes the mark of the old tribe, and believing in “both X and Y” becomes the mark of the new tribe. Which seems to me what typically happens in politics.
I’d like to be able to consider the “postrationalist” or “metarationalist” claims individually, perhaps to agree with some, disagree with some, and express uncertainty about some, instead of being given two separate packages and told to choose the better one.
(Then of course there remains the problem of the identity of a “rationalist”, where I don’t expect people to agree, because that’s a matter of aesthetic preferences and social pressures. I’m not pretending any middle ground here; I enjoy the label of “rationalist” or “x-rationalist”, and I try to be the one who can cooperate and is willing to pay the cost, hoping to become stronger as a team. I don’t think my contribution matters a lot, but I don’t see that as a reason for defecting.)
I certainly see some negative attitudes towards this sort of thing on LW, but it doesn’t look to me at all like “vague annoyance that Rationalist principles are being challenged”. Could you explain why you think that’s what it is?
(Full disclosure: your description above seems to me like an example of my snarky thesis that postrationality = knowing about rationality + feeling superior to rationalists. But I think that in feeling that way I’m being uncharitable in almost exactly the way I’m suspecting you of being uncharitable. :-) )
For what it’s worth, I’m not a fan of the notion that anything that successfully builds on rationality is a part of rationality. Not because it’s exactly wrong, but because surely it could happen that the self-identified rationalist community has a wrong or incomplete idea of what actually constitutes effective thinking. In that case, a New Improved Version should indeed be “part of rationality”, but until the actual so-called rationalists catch up it might not look that way. And if the rationalist community were sufficiently dysfunctional, calling the New Improved Version “rationality” might be counterproductive. I am not claiming that any of this is actually the case, and in particular I am not claiming that the “postrationalists” or “metarationalists” are in fact in possession of genuine improvements on LW-style rationality. But it’s not a possibility that can be ruled out a priori, and this “automatically part of rationality” thing seems to me like it fails to acknowledge the possibility.
I don’t have much to say to most of that besides nodding my head sagely. I will remark, though, that “developmental stage” theories like Kegan’s almost always rub me the wrong way, because they tend to encourage the sort of smugly superior attitude I fear I detect in much “postrationalist” talk of rationalism. I don’t think I have ever heard any enthusiast for such a theory place themselves anywhere other than in the latest “stage”.
(I don’t mean to claim that no such theory can be correct. But I mistrust the motives of those who espouse them, and I fear that the pleasure of looking down on others is a good enough explanation for much of the approval such theories enjoy that I’d need to see some actual good evidence before embracing such a theory. I haven’t particularly looked for such evidence, in Kegan’s case or any other; but nor have I seen anyone offering any.)
When I learned about Kohlberg’s stages of moral development at school, there was a nice example of a moral problem (something like the Trolley problem, but I think it was about stealing an expensive medicine to heal someone) where either side could be argued from each stage of moral development. For example, you could make a completely selfish argument for either side (“I don’t care about anyone’s property” or “I don’t care about anyone’s health”), but you could also make an abstract principled argument for either side (“we should optimize for an orderly society” or “we should optimize for helping humans”; simplified versions). The lesson was that the degree of moral development is not the same as the position on an issue.
If I look at “rationality / postrationality” vs “Kegan’s stages” through a similar lens, I can see how people at different stages could still argue for either side. Therefore, one could “explain” either side as a manifestation of any of stages 3, 4, and 5.
If Stage 3 is “socially determined, based on the real or imagined expectations of others”, we could argue that people who use the label “rationalists” do it because they are in Stage 3, and they believe that other “rationalists” expect them to use this label, so they follow the social pressure. But just as well we could argue that people who avoid the label “rationalists” (and use “post-rationalists” instead) do it because their social environment disapproves of the “rationalist” label. Both sides could be following social pressure, only different social pressures, from different groups of people. Maybe “rationalists” are scared that they could lose their group identity. And maybe “post-rationalists” are scared that someone from their social group could pattern-match them to “rationalists” and consequently exclude them from their group, whatever it happens to be (academia, buddhists, cool postmodern people, etc.).
If Stage 4 is “determined by a set of values that they have authored for themselves”, we could similarly argue that “rationalists” have chosen the rational way for themselves, in defiance of the whole society (rejected religion and mysterious answers, criticized education that teaches the teachers’ passwords), and, using reason and science as their guides, they found people with similar values at LessWrong, thanks to Aumann’s “great minds think alike” theorem. But just as well we could argue that “post-rationalists” have chosen the post-rational way for themselves, in defiance of the LessWrong “rationalists”. People from both groups can feel like heroic lonely warriors in an ignorant world dismissive of their ideas.
If people in Stage 5 are “no longer bound to any particular aspect of themselves or their history, and they are free to allow themselves to focus on the flow of their lives”, we could find supportive arguments for that, too. The zero-th virtue (“do not ask whether it is ‘the Way’ to do this or that; ask whether the sky is blue or green; if you speak overmuch of the Way you will not attain it”), internal criticism of LW as “shiny distraction” on one side; abandoning the “rationalist” label on the other side.
What most likely happens in reality is that both sides attract various kinds of people. (And even according to Kegan, one person is often in multiple stages at the same time.) However, here I am going to break the symmetry and say that to me it seems the “post-rationalist” side almost defines itself as “we are in Stage 5, and those who identify as ‘rationalists’ are in Stage 4”. At least this is how it seems to me from outside. (But complaining too much about this would be the pot calling the kettle black, because “rationalists” define themselves as “we are the rational ones, in the insane world”. So in a karmic sense they deserved such a comeback.)
Also, accusing other people of not being in Stage 5 feels to me like a kafkatrap. There is no way to defend against such an accusation, because whatever evidence of being in Stage 5 you bring can be dismissed with “Stage 5s never claim to be Stage 5s, so everything you said is evidence of you not being in Stage 5”. (But it probably doesn’t work the other way. If you admit that you are not in Stage 5, that statement will be taken at face value. At least I think so; I didn’t actually try this.) So how does one convince others that they are in Stage 5? From observation, the solution seems to be having a blog about Kegan’s stages, and judging others as not being at Stage 5 yet; if you do this, you establish yourself as an expert on Stage 5, and by definition only people in Stage 5 can be experts on Stage 5. If these are the rules of the game, I don’t want to play it. (Note how I used the same cheap status trick here: defining other people as pawns in a system, and myself as the smart one who is above and beyond the system. Meh. Oops, I did it again. I am so meta I must be at Stage 8 at least. Oops, I am doing it again. I admit the game is a bit addictive.)
For me, the “rationalist” movement is a place where people similar to me can come and find each other. (Roughly defined as: high IQ, non-religious, trying to “win”, willing to help each other, trustworthy, not interested in status games. There is probably more that I can’t easily describe here; probably some clicking on personality level.) Even most people who come to LW meetups don’t satisfy my criteria, but there at least I can find the few ones much easier than in the general population. I would be sad to lose this one coordination point. Meeting such people brings value to my life; I find it emotionally satisfying to talk with people openly about topics that interest me without having to censor my thoughts or explain long inferential distances; sometimes I also get some useful advice. At this moment I don’t see any value I could get from “post-rationality”, but I am willing to learn, as long as it doesn’t feel to me as someone just playing status games, because I have low tolerance for that.
Kegan has published a lot of evidence about the consistency of measurements in his scheme. See “A guide to the subject-object interview: its administration and interpretation” by Lisa Lahey [and four others]. As for validity, not so much, but it does build on the widely accepted work of others (Piaget etc.), and The Evolving Self has about 8 pages of citations and references, including:
Kegan, R. 1976. Ego and truth: personality and the Piagetian paradigm. Ph.D. dissertation, Harvard University.
_ 1977. The sweeter welcome: Martin Buber, Bernard Malamud and Saul Bellow. Needham Heights, Mass.: Wexford.
_ 1978. Child development and health education. Principal 57 (3): 91-95.
_ 1979. The evolving self: a process conception for ego psychology. Counseling Psychologist 8 (2): 5-34.
_ 1980. There the dance is: religious dimensions of developmental theory. In Toward moral and religious maturity, ed. J. W. Fowler and A. Vergote. Morristown, N.J.: Silver Burdette.
_ 1981. A neo-Piagetian approach to object relations. In The self: psychology, psychoanalysis and anthropology, ed. B. Lee and G. Noam. New York: Plenum Press.
Not very convincing.
My summary of Kegan’s model is here. My suggestion is to try it and see if it works.
https://drive.google.com/file/d/0B_hpownP1A4PdERFVXJDVE5SRnc/view?usp=sharing
Thanks for the pointers. I’m more interested in validity than consistency here, I think.
I was intending to inform, not to convince. (I agree that no one should be convinced of anything much by my saying that I mistrust some people’s motives.)
I’m working on an overview of the science on spiritual enlightenment. I’m also looking into who has credible claims to it, whether it is something worth pursuing, and what methods are used to get there.
If anyone knows someone (or is someone) that thinks they might be there or part-way there and who would be willing to chat a bit, that’d be lovely. If you’ve just dabbled in some mystical practices and had a few strange experiences and want to bounce some ideas around, that could be fun too.
This blog doesn’t appear to be active anymore, but it contains a lot of helpful ideas from an LWer who was an experienced meditator.
The blog led me to buy the book The Mind Illuminated which is a very clear, thorough, secular and neurologically sound (where possible) manual on attaining classical enlightenment through vipassana+mindfulness. I’m currently trying to follow its program as well as I can.
+1 for the suggestions made by others. I will ping the blog writer about this post to see if he’s interested in reaching out.
You may also want to look at Daniel Ingram and his MCTB community.
I’ve read all of Daniel Ingram’s stuff. He’s a fantastic resource. If you like his stuff, MCTB v2 is scheduled to come out later this year. The draft is much improved over the original IMO.
Specifically for meditation: I think Romeo Stevens has worked with mindfulness recently, if that’s close to what you’re looking for? (You can probably ping him here).
Mindfulness is a part of it; I’m interested in the end goal: the lasting changes in perception that are meant to come about through mindfulness or other practices.
I know of famous people in the mindfulness world (Shinzen Young, John Yates, and Bhante Gunaratana), but I don’t know them personally. Still, emailing them may be worth a shot?
I’ve chatted a little with Shinzen on one of his retreats but I haven’t yet looked into the other two. Thanks lifelonglearner.
No problem! John Yates is better known as “Culadasa”, by the way. He’s the author of The Mind Illuminated.
Oh, I feel silly; I should have just googled the names, since I’m familiar with them. I know Gunaratana by his book and John Yates by his alternate name Culadasa. Thanks anyway, lifelonglearner; they’ve proven to be an excellent help.
I’m working on a primer on the planning fallacy that will cover statistics, debiasing, and general research of the topic. In the coming weeks, I’d love for some people to give quick feedback on the flow / readability of the primer, if they’re interested.
Exciting stuff.
I was going to comment about how I had taken care to make a conservative estimate. But then I decided it’d probably be better to actually finish the draft first. Now I’m here proudly announcing that I have a first draft done before my set deadline! Hooray!
Link if anyone wants to leave some helpful feedback:
https://docs.google.com/document/d/1i1cWXjmrr76hHtok5nuOz2Yml5xLk88-MQkIYW8RO8I/edit
Happy to help; send me a draft when you have it.
Sure, thanks!
Post a link or pm me.
Will do! (Expected draft finish date is in 2 weeks, so I’ll ping you then)
My internet presence and my IRL presence among my friends have fallen to about zero as I am doing a final push to graduate with my PhD in cell biology and genomics. I’m on a job interview right now for a position studying something I am passionate about for real. The thesis is being written (and LaTeX being learned) for, hopefully, a defense at the end of March.
It’s remarkable how much data I have when I actually dig everything up from the last 3 years and lay it out side by side.
I’ve been:
1) Self-hacking into liking programming
2) Learning programming (primarily using the Odin Project)
I’ve been trying to learn programming (but not in a very disciplined / systematic fashion). Would you recommend the Odin Project? (Everyday Utilitarian recommended it, IIRC, but I was turned off by the cross-linking to different places.)
How goes your self-hacking? I’ve played around w/ it for math, and the results were pretty good (if we’re talking about generally the same thing, that is).
The self-hacking is going pretty well, considering that I started out absolutely hating programming. A problem that arises is that I don’t currently like it enough for it to be self-motivating just through personal enjoyment. I actually got a lot more accomplished when the motivation was “Do the thing that I hate (and learn to like it/ change my self-identity of hating it) so that I can get a better job (...Eventually. I like my current job, so no rush).” Now I like it well enough that the motivation is “Do that thing you like because you like it”, but there’s usually something else to do that I like better.
I’ve also done self-hacking for math and mathy subjects, but it was before I would have known of the term. It worked rather well!
Odin Project is more of a slog, but it seems like it will get you where you need to go. I had a lot more FUN on sites like Codewars, which was more useful for the self-hacking part.
Hm, thanks for your thoughts on the matter. I’ve noticed, too, that once I get a thing to “not too terrible”, it feels less like I have to work on it. But then I’ll just prioritize other things over it.
I’d like there to be some kind of list of rat-houses around the world. But I can’t champion this project. I also live on my own.
I’m working next week on what I call user-aligned computing.
Video: https://www.youtube.com/watch?v=XQgtVdyNzaQ; code here: https://github.com/eb4890/agorint.
It might be a bit like the control problem for normal computers (but mainly with a separate evolutionary pathway), except it doesn’t assume the thing it is trying to control is a superintelligent genie with its own goals. It is a user-controlled market which determines the programs’ access to system resources such as memory, processing power, and I/O.
I am hoping it might be part of a program of Intelligence Amplification, and that it would make computers more secure in general (less of a monoculture with easily acquirable ambient authority), so it might have an impact on some fast-takeoff scenarios.
I’m about to start making the market work, having got some basic infrastructure working.
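A toy illustration of the market idea (purely hypothetical; this is not the actual agorint design, and all names and numbers here are made up): the user assigns each program a budget, programs bid for a resource, and each program receives a share proportional to the part of its bid that its budget can cover.

    ;; Toy illustration only (not the actual agorint design): allocate
    ;; a resource among programs in proportion to their budget-capped bids.
    (define (allocate resource-total bids budgets)
      ;; bids and budgets: association lists of (program . amount)
      (let* ((effective (map (lambda (bid)
                               (let ((prog (car bid)))
                                 (cons prog
                                       (min (cdr bid)
                                            (cdr (assq prog budgets))))))
                             bids))
             (total (apply + (map cdr effective))))
        (map (lambda (e)
               (cons (car e)
                     (if (zero? total)
                         0
                         (* resource-total (/ (cdr e) total)))))
             effective)))

    ;; Example: split 100 units of CPU between two programs; the
    ;; indexer's bid of 20 is capped by its budget of 10.
    ;; (allocate 100 '((editor . 5) (indexer . 20))
    ;;               '((editor . 10) (indexer . 10)))
    ;; => ((editor . 100/3) (indexer . 200/3))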