Aspiring rationalist in Ottawa, Canada.
StefanDeYoung
One of the main reasons to have a community blog like Less Wrong is to create common knowledge. I see this kind of summary/highlight post as doing a similar kind of work to the canonization that Raemon wrote about in his Peer Review post.
Thank you.
The No do-overs section reminded me of a recent conversation. A friend was giving me a lift home from a rationality meetup, we got off of the highway, and I told him to turn right. We should have turned left. Once we realised my mistake, I apologized. His response was something along the lines of “We’ve just been talking for the last three hours. Why do you believe I’d be averse to spending another five minutes with you?”
The feeling I had wasn’t really that there was any badness to spending more time talking, but I knew that he was meeting someone else after dropping me off, and I didn’t want to make him late. I dislike being late. I projected that feeling on to him.
The no-do-overs feeling also often arises when you forget people’s names.
Also, I want to congratulate you for writing the exam. :)
I need help figuring out how to use this scoring rule. Please consider the following application.
How much does it cost to mail a letter under 30g in Canada?
I remember when I was a child buying 45c stamps, so the price is likely to be larger than that. It’s been over a decade or so, and assuming a 2% rise in cost per year, we should be around 45c × 1.02^10 ≈ 55c per stamp. However, we also had big budget cuts to our postal service that even I learned about despite not reading the news. Let’s say that Canada Post increased their prices by 25% to accommodate some shortfall: 55c × 1.25 ≈ 69c. My estimate is that stamps cost 75c.
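The compounding arithmetic above can be sanity-checked in a few lines. The 10-year horizon is my reading of “over a decade or so”; the other numbers come straight from the paragraph:

```python
# Sanity-check the compounding estimate.
childhood_price = 0.45   # remembered 45c stamps
annual_growth = 1.02     # assumed 2% price rise per year
years = 10               # assumption: "over a decade or so"

inflated = childhood_price * annual_growth ** years
with_shortfall_hike = inflated * 1.25   # assumed 25% budget-cut increase

print(round(inflated, 2))             # → 0.55
print(round(with_shortfall_hike, 2))  # → 0.69
```

Rounding 69c up to a round 75c then matches the stated estimate.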
What should be my confidence interval? Would I be surprised if a stamp cost a dollar? Not really, but it feels like an upper bound. Would I be surprised if a stamp cost less than 50c? Yes. 60c? Yes. 70c? Hmmm… Assume that I’m well calibrated, so I’m reporting 90% confidence for an interval of stamps costing 70c to 100c.
Answer: Stamps in booklets cost 85c each, individual stamps are 100c each. Because I would always buy stamps in booklets, I will use the 85c figure.
S is the size of my confidence interval: S = 100 − 70 = 30. D is the distance between the true value and the interval, but it is 0 in this case because the true value is in the interval.
I’m not really sure what to do with this number, so let’s move to the next paragraph of the post.
The true value is 85c and the interval is [70, 100]. Because the true value is contained in the interval, D = 0 and the score is just S = 30.
How does this incentivise honest reporting of confidence intervals?
Let’s say that, when I intuited my confidence interval above, I was perturbed that it wasn’t symmetric about my estimate of 75c, so I set it to [45, 105] for aesthetic reasons. In this case, my score would be S = 105 − 45 = 60, which is worse than my previous score by a factor of 2.
Let’s say that, when I remembered the price of stamps in my childhood, I was way off and remembered 14c stamps. Then I would believe that stamps should cost around 22c now. (Here I have the feeling of “nothing costs less than a quarter!”, so I would probably reject this estimate.) That memory would likely anchor me, so that I would set a high confidence on the price being within a narrow interval around 22c, and the distance D from that interval to the true value would dominate my score.
Am I trying to maximize this score?
I looked up the answer, and the lowest cost standard delivery is 85c (for stamps bought in booklets) for letters under 30g.
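Putting the exercise together: a minimal sketch of the scoring rule, assuming the score is simply S plus a penalty weight times D (the original post may weight misses differently; the penalty parameter and the anchored interval [15, 30] are my illustrative assumptions):

```python
def interval_score(lower, upper, truth, penalty=1.0):
    """Score a reported confidence interval; lower scores are better.

    S is the interval's size; D is the distance from the true value to
    the nearest edge of the interval (0 if the truth falls inside it).
    The penalty weight on D is an assumption of this sketch.
    """
    size = upper - lower
    distance = max(lower - truth, truth - upper, 0)
    return size + penalty * distance

# The worked example: interval 70c-100c, true price 85c (booklet stamps).
print(interval_score(70, 100, 85))  # → 30.0 (truth inside, so score = S)

# A hypothetical anchored, overconfident interval around 22c misses badly:
print(interval_score(15, 30, 85))   # → 70.0 (narrow, but D = 55 dominates)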
I like this approach because it follows the approach of Taking the Obvious Advice, and because of its focus on operationalising rationality rather than seeking insight porn.
As a short-term solution, would a Google Sheet work? I believe that you could then use a Google Form to populate the sheet. Here’s your example data in a spreadsheet.
I will return to this thread on December 11, 2017 to see if anyone else has subscribed to this project. I’m unable to commit any time prior to that date.
“You could call it heroic responsibility, maybe,” Harry Potter said. “Not like the usual sort. It means that whatever happens, no matter what, it’s always your fault. Even if you tell Professor McGonagall, she’s not responsible for what happens, you are. Following the school rules isn’t an excuse, someone else being in charge isn’t an excuse, even trying your best isn’t an excuse. There just aren’t any excuses, you’ve got to get the job done no matter what.” -HPMOR Chapter 75
Reality doesn’t grade on a curve.
In Subduing Moloch, Teja suggests intentionally creating a channel for rationalists to have one-on-one conversations with each other. As a result, he and I have already had a video chat, and we’ve joined the LessWrong Slack in order to determine if that might be an appropriate venue to build this project.
I intend to book a conversation with you, and I will also consider creating a similar Calendly system for people to book time with me.
I disagree that participants would already have to be superhuman, or even particularly strong rationalists. We can all get stronger together through mutual support even though none of us may already be “big-R Rationalists.”
In his post about explore/exploit tradeoffs, Sebastian Marshall remembers how Carlos Micelli scheduled a Skype call every day to improve his network and his English. I haven’t looked into how many of the people Micelli called were C-suite executives or research chairs or other similar high-status individuals. My guess is that he could have had good results speaking with interesting and smart people on any topic.
For myself, I remember a meetup that I attended in November last year. I was feeling drained by a day job that is not necessarily aligned with my purpose. The event itself was a meeting to brainstorm changes to the education system in Canada, which is also not necessarily aligned with my purpose. However, the charge and energy I got simply from speaking to smart people about interesting things was, and I want to stress this, amazing. For weeks afterwards, the feeling that I got from attending that meeting was all that I wanted to talk about.
If I could get that feeling every day…
Can you explain how the not-turning-on-the-phone drill would increase preparedness for the advent of AGI? Is it that it is a demonstration of humanity’s ability to coordinate on a massive scale?
Really appreciate the level of detail provided. My usual problems with “How To” type content are either “this is too specific, so I can’t see how to generalise” or “this is overly broad, and I’m not able to generate my own specific examples.” This post was very specific, so avoided the latter failure, and very long so that I got enough content from which to generalise.
Thank you!
You’re welcome. Something that I’m trying to improve about how I engage with LessWrong is writing out either a summary of the article (without re-referring to the article) or an explicit example of the concept in the article. My hope is that this will help me to actually grok what we’re discussing.
I remember at the start of each year of high-school having the experience of realising just how stupid and ignorant I had been the previous year. And each year, I was surprised to have the same experience. This revealed to me, I think, that I’m more episodic than diachronic in that I dissociate from my past selves.
I appreciate the advice here to have a more diachronic meta-personality. To implement this, I intend to double down on keeping a journal. I’ve struggled with this habit before, but upon rereading journal entries from a year ago, I have received insights into how to improve my life in the present.
This is very well written. The anecdote at the start especially. Thank you for sharing.
I agree that an hour a day is a large time commitment. I couldn’t agree to spend an hour of my time on this project. I would prefer a smaller time increment by default. For example, calls could be multiples of 15 minutes, with participants able to schedule themselves for multiple increments if desired. I’m sensitive to your point that choices are bad, but people’s schedules will vary so widely that being able to choose whether you want to talk for 1, 2, 3, or 4 intervals during any given week would allow this to reach a much wider group.
To your point that we should have a concrete set of suggestions for what to do on the call, agendas are essential.
I hadn’t read these sequences as part of LW 1.0, so thank you very much for bringing them back into the spotlight. Is there contained within them a listing of habits that have been useful to those aspiring to implement instrumental rationality? Is there a compendium of what obvious advice is on offer in various domains?
The E and I in “high-E, low-I” are empiricism and inference?
In her recent post about working remotely, Julia Evans mentions donut.ai as a slack plugin that randomly pairs members of a slack channel for discussions.
Can you provide additional details regarding eating Mealsquare instead of Soylent?
Thanks for the tip. I am sensitive to the limits of my own willpower.
A strategy that was working for me was keeping my daily tasks/to-do lists and my journal in the same book. That way, I needed to check into my book in order to do my work, and would be able to intersperse journaling in between lists as the urge arose.
At what point do we judge that our map of this particular part of the territory is sufficiently accurate, and accept the level of explanation that we’ve reached?
If we’re going to keep pulling on the thread of “why are the dominoes on the floor” past “Zacchary did it” then we need to know why we’re asking. Are we trying to prevent future messes? Are we concerned about exactly how our antique set of dominoes was treated? Are we trying to figure out who should clean this mess?
If we’re only trying to figure out who should clean the mess, then “Zacchary did it” is sufficient, and we can stop looking deeper.
I’m having trouble with formatting. Here is what I was trying to write, less my attempts to include links:
Greetings, LessWrong.
I’m a 21 y/o Physics undergrad at the University of Waterloo. I’m currently finishing a co-op work term at the Grand River Regional Cancer Centre. I’m also trying to build a satellite www.WatSat.ca.
My girlfriend recommended that I read HPMoR—which I find delightful—but I thought LessWrong a strange pen name. I followed the links back here, and spent a month or so skimming the site. I’m happy to find a place on the internet where people readily provide constructive criticism in support of self-optimization. I’m also particularly intrigued by this Bayesian Conspiracy you guys have going.
I tend to lurk on sites like this, rather than actually joining the community. However, I discovered a call for a meetup in Waterloo http://lesswrong.com/r/discussion/lw/790/are_there_any_lesswrongers_in_the_waterloo/, and I couldn’t help myself.