I’m also posting a bounty for suggesting good candidates: $1000 for successful leads on a new project manager; $100 for leads on a top 5 candidate
We will pay you $1000 if you:
Send us the name of a person…
…who we did not already have on our list…
…who we contacted because of your recommendation…
…who ends up taking on the role
We will pay you $100 if the person ends up among the top 5 candidates (by our evaluation), but does not take the role (given the other above constraints).
There’s no requirement for you to submit more than just a name. Though, of course, providing intros, references, and so forth, would make it more likely that we could actually evaluate the candidate.
NO bounty will be awarded if you...
Mention the person who actually gets hired, but we never see your message
Mention a person who does not get hired/become a top 5 candidate
Nominate yourself and get hired
If multiple people nominate the same person, bounty goes to the first person whose nomination we actually read and act on
Remaining details will be at our discretion. Feel free to ask questions in comments.
You can private message me here.
In a few weeks, a number of public figures may find themselves doing an awkward about-face from “masks don’t work and no one should wear them” to “masks do work and they are mandatory”.
I want to record that this prediction seems to have been correct: https://www.washingtonpost.com/health/2020/04/02/coronavirus-facemasks-policyreversal/
We used parameters based on a paper modelling Wuhan, which found that a ~2-day infectious period predicted the spread best.
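For concreteness, here's how such a parameter typically enters (my gloss of a standard SEIR parameterization, not necessarily the exact form used in that paper): the infectious period $D$ sets the removal rate, so with $D \approx 2$ days,

$$\gamma = \frac{1}{D} \approx 0.5\,\text{day}^{-1}, \qquad R_0 = \frac{\beta}{\gamma},$$

meaning a shorter infectious period forces a higher transmission rate $\beta$ to fit the same observed growth.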
Adding cumulative statistics is in the pipeline; I or one of the devs might get around to it today.
There’s currently a Foretold community attempting to answer this question here, using both general Guesstimate models and human judgement taking into account the nuances of each country. We’ve hired some superforecasters from Good Judgment who will start working on it in a few days.
(Tangential: as part of the Epidemic Forecasting project at FHI we are feeding this data into GLEAM, which is a global SEIR model running on high-performance computers, based on a database of millions of airline and commute connections. The model also tries to factor in seasonality, air traffic reductions, and the effectiveness of various containment measures.)
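To make the shape of such a model concrete, here's a deliberately tiny deterministic sketch of a metapopulation SEIR (all numbers hypothetical, and nothing like GLEAM's actual implementation, which is stochastic and runs over millions of real connections):

```python
import numpy as np

# Hypothetical 3-city example; M[i, j] = fraction of city i's population
# travelling to city j per day. Numbers are made up for illustration.
M = np.array([[0.000, 0.010, 0.005],
              [0.020, 0.000, 0.010],
              [0.010, 0.020, 0.000]])
N = np.array([1e7, 5e6, 2e6])           # city populations (held fixed, a simplification)
beta, sigma, gamma = 0.6, 1 / 4, 1 / 2  # transmission; 1/incubation; 1/infectious period (~2 days)

S, E, I, R = N.copy(), np.zeros(3), np.array([0.0, 100.0, 0.0]), np.zeros(3)

def mix(X):
    """Move people between cities: inflows minus outflows per the mobility matrix."""
    return X + M.T @ X - M.sum(axis=1) * X

for day in range(60):            # one Euler step per day
    new_inf = beta * S * I / N   # within-city transmission
    S, E, I, R = (mix(S - new_inf),
                  mix(E + new_inf - sigma * E),
                  mix(I + sigma * E - gamma * I),
                  mix(R + gamma * I))

print("Currently infectious per city after 60 days:", I.round())
```

Seasonality and air-traffic reductions would then enter as time-varying modifiers on beta and M.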
After working on a pandemic forecasting dashboard for a week, I should add an additional reason for why this is a good opportunity:
Access to resources
Software developers are an incredibly scarce resource, and they command massive salaries compared to many other jobs. But over the last week, I’ve received numerous offers from devs who are willing to volunteer 15+ hours a week.
Human attention is also scarce, and it’s hard to get people to respond. But when our team reached out to more senior connections or collaborators, we had a 100% reply rate.
If you’re working on important COVID-19 projects, there’s an incredible number of people willing to help out at prices far below market rate.
If this were the case, it ought to be visible indirectly through its effect on Ohio’s healthcare system. I haven’t heard of such reports (and I do follow the situation fairly closely), but I haven’t looked for them either.
I adapted Eli Tyre’s model into a spreadsheet where you can calculate the current number of cases in your country (by extrapolating from observed cases using some assumptions about doubling time and confirmation rate).
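For reference, the core arithmetic is roughly the following (my reconstruction of the spreadsheet's logic, with made-up example numbers; the actual sheet may differ in its details):

```python
def estimate_current_infections(confirmed, confirmation_rate,
                                reporting_delay_days, doubling_time_days):
    """Scale confirmed cases up by the fraction of infections that are ever
    confirmed, then grow the result forward over the reporting delay."""
    true_cases_at_report = confirmed / confirmation_rate
    growth = 2 ** (reporting_delay_days / doubling_time_days)
    return true_cases_at_report * growth

# Example: 1,000 confirmed cases, 20% of infections ever confirmed,
# ~10 days from infection to confirmation, ~5-day doubling time.
print(estimate_current_infections(1_000, 0.20, 10, 5))  # -> 20000.0
```

Note how sensitive the estimate is to the assumed confirmation rate and delay; the point of the spreadsheet is to let you vary those assumptions yourself.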
I made a new version of your spreadsheet where you can select your location (from the Johns Hopkins list), instead of just looking at the Bay area.
While the local steps are fairly clear, after a quick read I found it moderately confusing what this model was doing at a high level, and I think some distillation could be helpful.
There is a 5% chance of getting a critical form of COVID (source: WHO report)
That’s a 40-page report, and quickly Ctrl-F-ing “5 %” didn’t find anything to corroborate your claim, so it would be helpful if you could elaborate on that.
What time zone will this be in?
There’s a >20% chance I’ll join. There’s a much higher chance I’ll show up to write some comments (which can also be an important thing).
I’m happy you’re making this happen.
I think it’s useful to be able to translate between different ontologies.
This is one thing that apps like Airtable and Notion do very well: allowing you to show the same content in different ontologies (table / kanban board / list / calendar / Pinterest-style mood board).
Similarly, when you’re using Roam for documents, you don’t have to decide upfront “Do I want to have high-level bullet-points for team members, or for projects?“. The ability to embed text blocks in different places means you can change to another ontology quite seamlessly later, while preserving the same content.
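As a toy illustration of “same content, different display” (hypothetical data and field names, not how any of these apps are actually implemented):

```python
from collections import defaultdict

tasks = [
    {"title": "Draft report", "owner": "Alice", "status": "Doing"},
    {"title": "Review PR",    "owner": "Bob",   "status": "Todo"},
    {"title": "Ship v1",      "owner": "Alice", "status": "Done"},
]

def as_table(records):
    """One display: a flat table, one row per record."""
    for r in records:
        print(f'{r["title"]:<14} {r["owner"]:<6} {r["status"]}')

def as_kanban(records, group_by="status"):
    """Another display: a kanban board, records grouped into columns."""
    columns = defaultdict(list)
    for r in records:
        columns[r[group_by]].append(r["title"])
    for column, titles in columns.items():
        print(f"[{column}] " + ", ".join(titles))

as_table(tasks)   # same underlying records...
as_kanban(tasks)  # ...two interchangeable views of them
```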
Ozzie Gooen pointed out to me that this is perhaps an abuse of terminology, since “the semantic data is the same, and that typically when ‘ontology’ is used for code environments, it describes what the data means, not how it’s displayed.”
In response, the thing I’m pointing at that seems interesting is that there is a continuum between different displays and different semantic data: two “displays” which are easily interchangeable in Roam will not be in Docs or Workflowy, since those lack the “embed bullet-point” functionality, even though superficially they’re all just bullet-point lists.
So far, about 30,000 questions have been answered by about 1,300 users since the end of December 2019.
That’s a surprisingly high number of people. Curious where they came from?
If you look at the top 10-20 or so posts, as well as a bunch of niche posts about machine learning and AI, you’ll see the sort of discussion we tend to have best on LessWrong. I don’t come here to get ‘life-improvements’ or ‘self-help’; I come here much more to be part of a small intellectual community that’s very curious about human rationality.
I wanted to follow up on this a bit.
TLDR: While LessWrong readers care a lot about self-improvement (though somewhat tangentially), reading forums alone likely won’t have a big effect on life success. But that’s not really that relevant; the most relevant thing to look at is how much progress the community has made on the technical, mathematical, and philosophical questions it has focused on most. Unfortunately, that discussion is very hard to have without spending a lot of time doing actual maths and philosophy (though if you wanted to do that, I’m sure there are people who would be really happy to discuss those things).
If what you wanted to achieve was life-improvements, reading a forum seems like a confusing approach.
Things that I expect to work better are:
personally tailored 1-on-1 advice (e.g. seeing a sleep psychologist, a therapist, a personal trainer or a life coach)
working with great mentors or colleagues and learning from them
deliberate practice: applying techniques for having more productive disagreements when you actually disagree with colleagues, implementing different productivity systems and seeing how well they work for you, and regularly turning your beliefs into predictions and bets to check how well you’re actually reasoning
taking on projects that step the right distance beyond your comfort zone
just changing whatever part of your environment makes things bad for you (changing jobs, moving to another city, leaving a relationship, starting a relationship, changing your degree, buying a new desk chair, …)
And even then, realistic expectations for self-improvement might be quite slow. (Though the magic comes when you manage to compound such slow improvements over a long time-period.)
There’s previously been some discussion here around whether being a LessWrong reader correlates with increased life success (see e.g. this and this).
For the community as a whole, the answer seems to be overwhelmingly positive. In the span of roughly a decade, people who combined ideas about how to reason under uncertainty with impartial altruistic values, and who used those to conclude that it would be important to work on issues like AI alignment, have done some very impressive things (as judged from an outside perspective). They’ve launched billion-dollar foundations, set up research institutes with 30+ employees at some of the world’s most prestigious universities, and gotten endorsements from some of the world’s richest and most influential people, like Elon Musk and Bill Gates. (NOTE: I’m going to caveat these claims below.)
The effects on individual readers are a more complex issue and the relevant variables are harder to measure. (Personally I think there will be some improvements in something like “the ability to think clearly about hard problems”, but that that will largely stem from readers of LessWrong already being selected for being the kinds of people who are good at that.)
Regardless, like Ben hints at, this partly seems like the wrong metric to focus on. This is the caveat.
While people at LessWrong are interested in self-improvement, one of the key things they have been trying to do is reason safely about superintelligences: to take a problem that’s far in the future, where the stakes are potentially very high, where there is no established field of research, and where thinking about it can feel weird and disorienting… and still do so in a way where you get to the truth.
So personally I think the biggest victories are some impressive technical progress in this domain. Like, a bunch of maths and a lot of conceptual philosophy.
I believe this because I have my own thoughts about what seems important to work on and what kinds of thinking make progress on those problems. Sharing those with someone who hasn’t spent much time around LessWrong could take many hours of conversation, and I think they would often remain unconvinced. It’s just hard to think and talk about complex issues in any domain. It would be similarly hard for me to understand why a biology PhD student thinks one theory is more important than another based only on the merits of the theories, without any appeal to what other senior biologists think.
It’s a situation where to understand why I think this is important someone might need to do a lot of maths and philosophy… which they probably won’t do unless they already think it is important. I don’t know how to solve that chicken-egg problem (except for talking to people who were independently curious about that kind of stuff). But my not being able to solve it doesn’t change the fact that it’s there. And that I did spend hundreds of hours engaging with the relevant content and now do have detailed opinions about it.
So, to conclude… people on LessWrong are trying to make progress on AI and rationality, and one important perspective for thinking about LessWrong is whether people are actually making progress on AI and rationality. I’d encourage you (Jon) to engage with that perspective as an important lens through which to understand LessWrong.
Having said that, I want to note that I’m glad that you seem to want to engage in good faith with people from LessWrong, and I hope you’ll have some interesting conversations.
I’d be quite curious about more concrete examples of systems where there is lots of pressure in *the wrong direction*, due to broken alarms. (Be they minds, organisations, or something else.) The OP hints at it with the consulting example, as does habryka in his nomination.
I strongly expect there to be interesting ones, but I have neither observed any nor spent much time looking.
That seems like weak evidence of karma info-cascades: posts with more karma get more upvotes *simply because* they have more karma, in a way which ultimately doesn’t correlate with their “true value” (as measured by the review process).
Potential mediating causes include users being anchored by karma, or more karma causing a larger share of the attention of the userbase (due to various sorting algorithms).
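As a toy illustration of that mechanism (a Pólya-urn-style simulation; emphatically not a model of LessWrong’s actual voting or sorting): give ten posts identical “true value”, and let each arriving voter upvote a post with probability proportional to its current karma.

```python
import random

random.seed(0)
karma = [1.0] * 10         # ten posts, identical true value by construction

for _ in range(5_000):     # each voter picks a post proportionally to karma
    r = random.uniform(0, sum(karma))
    cumulative = 0.0
    for i, k in enumerate(karma):
        cumulative += k
        if r <= cumulative:
            karma[i] += 1  # the rich get richer
            break

print(sorted(karma, reverse=True))
# Despite identical underlying quality, a few posts end up with most of the karma.
```

Early random luck compounds into large, persistent karma gaps, which is exactly the pattern that would fail to correlate with the review results.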
Overall I’m still quite confused, so for my own benefit, I’ll try to rephrase the problem here in my own words:
Engaging seriously with CFAR’s content adds lots of things and takes away a lot of things. You can get the affordance to creatively tweak your life and mind to get what you want, or the ability to reason with parts of yourself that were previously just a kludgy mess of something-hard-to-describe. You might lose your contentment with black-box fences and not applying reductionism everywhere, or the voice promising you’ll finish your thesis next week if you just try hard enough.
But in general, simply taking out some mental stuff and inserting an equal amount of something else isn’t necessarily a sanity-preserving process. This can be true even when the new content is more truth-tracking than what it removed. In a sense people are trying to move between two paradigms—but often without any meta-level paradigm-shifting skills.
Like, if you feel common-sense reasoning is now nonsense, but you’re not sure how to relate to the singularity/rationality stuff, it’s not an adequate response for me to say “do you want to double crux about that?”, for the same reason that reading Bible verses isn’t adequate advice to a reluctant atheist tentatively hanging around church.
I don’t think all techniques are symmetric, or that there aren’t ways of resolving internal conflict which systematically lead to better results, or that you can’t trust your inside view when something superficially pattern matches to a bad pathway.
But I don’t know the answer to the question of “How do you reason, when one of your core reasoning tools is taken away? And when those tools have accumulated years of implicit wisdom, instinctively hill-climbing to protecting what you care about?”
I think sometimes these consequences are noticeable before someone fully undergoes them. For example, after going to CFAR I had close friends who were terrified of rationality techniques, and who were furious when I suggested they make some creative but unorthodox tweaks to their degree in order to allow more time for interesting side-projects (or, as in Anna’s example, finishing a PhD 4 months earlier). In fact, they were furious at the mere suggestion that such tweaks might exist. Curiously, these very same friends were also quite high-performing and far above average on Big 5 measures of intellect and openness. They surely understood the suggestions.
There can be many explanations of what’s going on, and I’m not sure which is right. But one idea is simply that 1) some part of them had something to protect, and 2) some part correctly predicted that reasoning about these things in the way I suggested would inevitably lead to a major upending of their life.
I can imagine inside views that might generate discomfort like this.
“If AI was a problem, and the world is made of heavy tailed distributions, then only tail-end computer scientists matter and since I’m not one of those I lose my ability to contribute to the world and the things I care about won’t matter.”
“If I engaged with the creative and principled optimisation processes rationalists apply to things, I would lose the ability to go to my mom for advice when I’m lost and trust her, or just call my childhood friend and rant about everything-and-nothing for 2h when I don’t know what to do about a problem.”
I don’t know how to do paradigm-shifting, or what meta-level skills are required. Writing these words helped me get a clearer sense of the shape of the problem.
(Note: this comment was heavily edited for clarity following some feedback.)