Does the LessWrong editor’s documentation on its handling of LaTeX answer your question?
Quick note of caution on changing the incentive rate downwards. If you might be sampling in times/places where people have previously experienced the higher incentive, this might trigger a loss framing for people. I.e., if you move to the busier time that will eventually be your normal collection time and location, and give the people who are around then the impression that $5 is the normal compensation they’ll get, then when it drops to $1 they may be less inclined to contribute their sample than if they’d never seen the $5 option.
Maybe you could moderate that effect by some creative design like the board saying $1 prominently, then appending “special today $5” somehow in a way that clearly communicates it’s a special temporary extra.
Checking about 2 years after my initial post, $TSLA has fallen by more than 50%: the split-adjusted price in early April 2022 was around $330 or $340, and today it’s around $145.
Eyeballing the chart, it looks like it’s always been lower than that in the subsequent period, and was down to around $185 at the 12 month mark that was initially the target of the competition. That last bit is the bit that was least clear to me at the time: it seemed high probability that Tesla stock would have to fall at some point, but I expressed uncertainty about when because I thought there was a fair probability the market could stay irrational for a longer period.
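For concreteness, a quick check of that “more than 50%” figure using the approximate split-adjusted prices quoted above (rough values read off the chart, not exact closing prices):

$$\frac{330 - 145}{330} \approx 0.56, \qquad \frac{340 - 145}{340} \approx 0.57$$

i.e. a fall of roughly 56–57% over the two years.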
What timezone(s) will this programme be running in, please?
Is the opening paragraph at the top of this article the prompt you gave Claude, or text written for us?
If the latter, could you share the prompt here, please?
There’s this nice paper where a load of different researchers are given the same (I think simulated) data and it looks at how the researchers’ results differ.
Might the research you were thinking of be the work by Raphael Silberzahn, Eric L. Uhlmann and Brian Nosek?
Nature comment: https://www.nature.com/articles/526189a
Full research article: https://journals.sagepub.com/doi/10.1177/2515245917747646
In the UK, I think the most common assumption for cauliflower ear would be playing rugby, rather than a combat sport.
No idea if that’s the statistically correct inference from seeing someone with the condition.
I enjoyed filling this out!
The question here is the opposite of its title:
Unknown features: “Which of the following features of the LessWrong website did you know how to use before you read this question?”
That could result in some respondents answering in reverse if they skim.
As well as the generic suggestions people are making in the answers, it seems like you might be able to get more specific suggestions if the question specified whether you’re looking for long distance vs. nearby/in-person dating, and (if the latter) a rough idea of where you are located.
You’ve got an asterisk in the first sentence, but I couldn’t see it referencing anything.
~1.2e16 bases annually
Is this a typo? If I’m reading the chart correctly, it looks like it’s of the order 1.2e15.
Blaise Pascal – the I Think Therefore I Am guy
The ‘I think therefore I am’ guy was René Descartes.
I am strongly don’t buy
Grammar: delete “am”
If the market are
Grammar: “market is” or “markets are”
You mention here that “of course” you agree that AI is the dominant risk, and that you rate p(doom) somewhere in the 5-10% range.
But that wasn’t at all clear to me from reading the opening to the article.
Eliezer Yudkowsky predicts doom from AI: that humanity faces likely extinction in the near future (years or decades) from a rogue unaligned superintelligent AI system. … I have evaluated this model in detail and found it substantially incorrect...
As written, that opener suggests to me that you think the overall model of doom being likely is substantially incorrect (not just the elided details about doom being the default outcome).
I feel it would be very helpful to the reader if the article were grounded, somewhere near the start, with the note you’ve made here: i.e., that your argument is with the specific doom case from EY, that you retain a significant p(doom), but that it’s based on different reasoning.
Years back I heard that 10 is a bad number for this kind of thing.
The reasoning goes that because it’s a round number, people will assume that you chose it to be round and that at least some of your entries are filler to get up to that number.
Whereas if you have 11 reasons or 7 or whatever, people will think that number is the actual number you needed to make all your points.
Have you looked at the Guild of the Rose at all?
UK-based researchers interested in this subject, and potentially international collaborators, could apply to the recently-announced UKRI funding opportunity:
https://www.ukri.org/opportunity/ageing-research-development-awards/
The UK effectively charges a very large tax for access to its postcode address file, making it out of reach for many uses and also being effectively a large tax on business, especially small business, that requires the information. This seems like quite an insane place to collect this much government revenue.
The article you link to points out that the PAF is controlled by Royal Mail, and I think Royal Mail gets the revenue from it. That isn’t government revenue: Royal Mail was privatised several years ago.
Are either of these relevant?
https://www.lesswrong.com/posts/x4dG4GhpZH2hgz59x/joy-in-the-merely-real
https://www.lesswrong.com/posts/Hs3ymqypvhgFMkgLb/doublethink-choosing-to-be-biased