Congratulations on running a year of meetups! That’s not easy.
In the past I’ve had difficulty pinning down an appropriate meeting schedule. Was there discussion in your group over the meetup frequency? When you rebooted the group, was it explicitly as a weekly group? How well did the first few members know each other before the decision to meet weekly was made?
The No do-overs section reminded me of a recent conversation. A friend was giving me a lift home from a rationality meetup; we got off the highway, and I told him to turn right. We should have turned left. Once we realized my mistake, I apologized. His response was something along the lines of “We’ve just been talking for the last three hours. Why do you believe I’d be averse to spending another five minutes with you?”
The feeling I had wasn’t really that spending more time talking would be bad; rather, I knew that he was meeting someone else after dropping me off, and I didn’t want to make him late. I dislike being late, and I projected that feeling onto him.
That no-do-overs feeling also often comes up when you forget people’s names.
Also, I want to congratulate you for writing the exam. :)
“You could call it heroic responsibility, maybe,” Harry Potter said. “Not like the usual sort. It means that whatever happens, no matter what, it’s always your fault. Even if you tell Professor McGonagall, she’s not responsible for what happens, you are. Following the school rules isn’t an excuse, someone else being in charge isn’t an excuse, even trying your best isn’t an excuse. There just aren’t any excuses, you’ve got to get the job done no matter what.” -HPMOR Chapter 75
Reality doesn’t grade on a curve.
One of the main reasons to have a community blog like Less Wrong is to create common knowledge. I see this kind of summary/highlight post as doing a similar kind of work to the canonization that Raemon wrote about in his Peer Review post.
The E and I in “high-E, low-I” are empiricism and inference?
This is very well written. The anecdote at the start especially. Thank you for sharing.
In Subduing Moloch, Teja suggests intentionally creating a channel for rationalists to have one-on-one conversations with each other. As a result, he and I have already had a video chat, and we’ve joined the LessWrong Slack in order to determine if that might be an appropriate venue to build this project.
I intend to book a conversation with you, and I will also consider creating a similar Calendly system for people to book time with me.
In her recent post about working remotely, Julia Evans mentions donut.ai as a Slack plugin that randomly pairs members of a Slack channel for discussions.
Do you see Skeptical Modernism as a new movement in philosophy, or can you point to a previous body of work on this subject?
I agree that an hour a day is a large time commitment; I couldn’t agree to spend an hour of my time on this project. I would prefer a smaller time increment by default. For example, calls could be multiples of 15 minutes, with participants able to schedule themselves for multiple increments if desired. I’m sensitive to your point that choices are bad, but people’s schedules will vary so widely that being able to choose whether you want to talk for 1, 2, 3, or 4 intervals during any given week would allow this to reach a much wider group.
To your point that we should have a concrete set of suggestions for what to do on the call, agendas are essential.
I disagree that participants would already have to be superhuman, or even particularly strong rationalists. We can all get stronger together through mutual support even though none of us may already be “big-R Rationalists.”
In his post about explore/exploit tradeoffs, Sebastian Marshall recounts how Carlos Micelli scheduled a Skype call every day to improve his network and his English. I haven’t looked into how many of the people Micelli called were C-suite executives or research chairs or other similarly high-status individuals. My guess is that he could have had good results speaking with interesting and smart people on any topic.
For myself, I remember a meetup that I attended in November last year. I was feeling drained by a day job that is not necessarily aligned with my purpose. The event itself was a meeting to brainstorm changes to the education system in Canada, which is also not necessarily aligned with my purpose. However, the charge and energy I got simply from speaking to smart people about interesting things were, and I want to stress this, amazing. For weeks afterwards, the feeling that I got from attending that meeting was all that I wanted to talk about.
If I could get that feeling every day...
Thanks for this reply. The technique of asking what each term of your equation represents is one I have not practiced in some time.
This answer very much helped me to understand the model.
You’re welcome. Something that I’m trying to improve about how I engage with LessWrong is writing out either a summary of the article (without re-referring to the article) or an explicit example of the concept in the article. My hope is that this will help me to actually grok what we’re discussing.
I need help figuring out how to use this scoring rule. Please consider the following application.
How much does it cost to mail a letter under 30g in Canada?[1]
I remember when I was a child buying 45c stamps, so it’s likely to be larger than that. It’s been over a decade or so, and assuming a 2% rise in cost per year, we should be around 45 × (1.02)^10 ≈ 60c per stamp. However, we also had big budget cuts to our postal service that even I learned about despite not reading the news. Let’s say that Canada Post increased their prices by 25% to accommodate some shortfall. My estimate is that stamps cost 75c.
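For the record, the same chain of numbers can be run without intermediate rounding (the exact product of the inflation step is closer to 55c than 60c, which is what pushed my headline figure up to 75c); the variable names below are mine:

```python
# Fermi estimate of the current stamp price, using the figures above.
childhood_price = 45   # cents; the stamp price I remember as a child
inflation = 1.02       # assumed 2% price rise per year
years = 10             # "over a decade or so"
cut_markup = 1.25      # assumed 25% hike after the budget cuts

estimate = childhood_price * inflation ** years * cut_markup
print(round(estimate))  # 69
```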
What should be my confidence interval? Would I be surprised if a stamp cost a dollar? Not really, but it feels like an upper bound. Would I be surprised if a stamp cost less than 50c? Yes. 60c? Yes. 70c? Hmmm.… Assume that I’m well calibrated, so I’m reporting 90% confidence for an interval of stamps costing 70c to 100c.
Answer: Stamps in booklets cost 85c each, individual stamps are 100c each. Because I would always buy stamps in booklets, I will use the 85c figure.
S is the size of my confidence interval, S = 100 − 70 = 30. D is the distance between the true value and the interval, but it is 0 in this case because the true value is in the interval.
I’m not really sure what to do with this number, so let’s move to the next paragraph of the post.
The true value is T = 85 and the interval is (L, U) = (70, 100). Because the true value is contained in the interval, D = 0.
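To make the bookkeeping concrete, here is a minimal sketch of the two ingredients (the function name is mine, and how S and D combine into the final score follows the original post’s rule, which I haven’t reproduced here):

```python
def interval_ingredients(lower, upper, true_value):
    """Return (S, D) for a reported confidence interval.

    S is the size (width) of the interval.
    D is the distance from the true value to the interval;
    it is 0 when the interval contains the true value.
    """
    S = upper - lower
    if lower <= true_value <= upper:
        D = 0
    else:
        D = min(abs(true_value - lower), abs(true_value - upper))
    return S, D

# The stamp example: interval (70, 100), true value 85c.
print(interval_ingredients(70, 100, 85))  # (30, 0)
```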
How does this incentivise honest reporting of confidence intervals?
Let’s say that, when I intuited my confidence interval above, I was perturbed that it wasn’t symmetric about my estimate of 75c, so I set it to (L, U) = (50, 100) for aesthetic reasons. In this case, my score would be −0.30, which is worse than my previous score by a factor of 2.
Let’s say that, when I remembered the price of stamps in my childhood, I was way off and remembered 14c stamps. Then I would believe that stamps should cost around 22c now. (Here I have the feeling of “nothing costs less than a quarter!”, so I would probably reject this estimate.) That would likely anchor me, so that I would set a high confidence on the price being within (L, U) = (20, 24).
Am I trying to maximize this score?
[1] I looked up the answer, and the lowest-cost standard delivery is for letters under 30g.
Another reason to become better at referring is to grow your network. I have in mind referring people to specific coaching. If we’re referring aspiring rationalists to people outside the community, those people outside the community will be incentivised to engage with us.
Can you provide additional details regarding eating MealSquares instead of Soylent?
Can you explain how the not-turning-on-the-phone drill would increase preparedness for the advent of AGI? Is it that it is a demonstration of humanity’s ability to coordinate on a massive scale?
I like this approach because it follows the approach of Taking the Obvious Advice, and because of its focus on operationalising rationality rather than seeking insight porn.
As a short-term solution, would a Google Sheet work? I believe that you could then use a Google Form to populate the sheet. Here’s your example data in a spreadsheet.
I will return to this thread on December 11, 2017 to see if anyone else has subscribed to this project. I’m unable to commit any time prior to that date.
I have also seen other users post about using Anki cards to remember insights from LW. However, I’ve had difficulty with formulating good flashcards related to this material.
Right now, I have a card for the Litany of Tarski. On one side is the litany (“If the sky is blue...”); on the other side, “Litany of Tarski.” When I see the card, I try to recite the litany in that form, but I also consider the underlying idea that there is a territory to be mapped, and that the map is supposed to reflect the territory. I might also create a new litany with some object other than the blue sky.
Is this the kind of card that you create? Can you give an example of how you use a card to remind yourself of insights rather than definitions?
Your plan currently only addresses x-risk from AGI. However, there are several other problems that should be considered if your goal is to prevent global catastrophe. I have recently been reading 80,000 Hours, and they have the following list of causes that may need to be included in your plan: https://80000hours.org/articles/cause-selection/
In general, I think that it’s difficult to survey a wide topic like AI Alignment or Existential Risk and, with granularity, write out a to-do list for solving it. I believe that people who work more intimately with each x-risk would be better suited to develop the on-the-ground action plan.
It is likely that a variety of x-risks would be helped by reaching for similar goals, in which case high-level coordinated action plans developed by groups focused on each x-risk would be useful to the community. If possible, try to attend events such as EA conferences where groups focusing on each of the possible global catastrophes will be present, and you can try to capture their shared action plans.