My sense is that many answers so far come from a place of sitting on the sidelines, or of having waded in a bit and found rationality not obviously helpful in the first place.
That seems to me a strange conclusion from going through the list of people who answered. All have >1000 karma on LessWrong. Most (all except Elo) have been on LessWrong for more than 6 years.
It would surprise me if any of them had spent less than 100 hours learning/thinking about how to make rationality work.
I myself spent years thinking about how to make calibration work. I tested multiple systems created by LessWrongers. That engagement with the topic led me to a view of how I think medicine could be revolutionized. But I’m still lacking a way to make it actually practical in my daily life.
I think “How to Measure Anything” is a useful book to get a sense of how professional rationality might actually look [...] But they do need at least some people who are good at that (and they need other people to listen to them, and a CEO or hiring specialist who can identify such people).
YCombinator tells their startups to talk to their users and do things that don’t scale, instead of hiring a professional rationalist to help them navigate uncertainty. To me that doesn’t look like it’s changing.
It’s a bit ridiculous to treat the problem of what rationality actually is as solved, and to hold the conviction that we are going to have rationality specialists.
FWIW: I’m not sure I’ve spent >100 hours on a ‘serious study of rationality’. Although I have been around a while, I am at best sporadically active. If I understand the karma mechanics, the great majority of my ~1400 karma comes from a single highly upvoted top-level post I wrote a few years ago. I have pretty sceptical reflexes re. rationality, the rationality community, etc., and this is reflected in the fact that (I think) the modal post/comment I make is critical.
On the topic ‘under the hood’ here:
I sympathise with the desire to ask conditional questions which don’t inevitably widen into broader foundational issues. “Is moral nihilism true?” doesn’t seem the right sort of ‘open question’ for “What are the open questions in Utilitarianism?”. It seems better for these topics to be segregated, regardless of how plausible the foundational ‘presumption’ is (“Is homeopathy/climate change even real?” also seems inapposite for ‘open questions in homeopathy/anthropogenic climate change’). (cf. ‘This isn’t a 101-space’).
That being said, I think superforecasting/GJP and RQ/CART etc. are at least highly relevant to the ‘Project’ (even if this seems to be taken very broadly to normative issues in general—if Wei_Dai’s list of topics are considered elements of the wider Project, then I definitely have spent more than 100 hours in the area). For a question cluster around “How can one best make decisions on unknown domains with scant data”, the superforecasting literature seems some of the lowest hanging fruit to pluck.
Yet community competence in these areas has apparently declined. If you google ‘lesswrong GJP’ (or similar terms) you find posts on them, but these posts are many years old. There has been interesting work done in the interim: here’s something on whether the skills generalise, and something else on a training technique that not only demonstrably improves forecasting performance, but also has a handy mnemonic one could ‘try at home’. (The same applies to RQ: Sotala wrote a cool sequence on Stanovich’s ‘What intelligence tests miss’, but this is 9 years old. Stanovich has written three books since expressly on rationality, none of which have been discussed here as best as I can tell.)
I don’t understand why, if there are multiple people who have spent >100 hours on the Project (broadly construed), there is no ‘lessons from the superforecasting literature’ write-up here (I am slowly working on one myself).
Maybe I just missed the memo and many people have kept abreast of this work (ditto other ‘relevant-looking work in academia’), and it is essentially tacit knowledge for people working on the Project, but they are focusing their efforts to develop other areas. If so, a shame this is not being put into common knowledge, and I remain mystified as to why the apparent neglect of these topics versus others: it is a lot easier to be sceptical of ‘is there anything there?’ for (say) circling, introspection/meditation/enlightenment, Kegan levels, or Focusing than for the GJP, and doubt in the foundation should substantially discount the value of further elaborations on a potentially unedifying edifice.
[Minor] I think the first para is meant to be block-quoted?
I know of a lot of people who continued studying and being interested in the forecasting perspective. I think the primary reason there has been less writing on it is just that LessWrong was dead for a while, and so we’ve seen fewer write-ups in general. (I also think there were some secondary factors that contributed, but that the absence of a publishing platform was the biggest.)
Given that the OP counts the Good Judgment Project as part of the movement, I think that certainly qualifies.
It’s my understanding that while the Good Judgment Project made progress on the question of how to arrive at the right probability, we still lack ways for people to integrate the making of regular forecasts into their personal and professional lives.
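To make the "integration" problem concrete, here is a minimal sketch (in Python) of the kind of personal forecast log one might keep: record a probability when making a prediction, fill in the outcome when it resolves, and track a Brier score over time. All names and numbers here are my own illustration, not any existing tool's API.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Forecast:
    question: str
    probability: float        # subjective probability the event occurs (0..1)
    outcome: Optional[bool] = None  # filled in once the question resolves


def brier_score(forecasts: List[Forecast]) -> float:
    """Mean squared error between stated probabilities and outcomes.

    0.0 is perfect; always guessing 0.5 scores 0.25.
    Unresolved forecasts are skipped.
    """
    resolved = [f for f in forecasts if f.outcome is not None]
    return sum((f.probability - float(f.outcome)) ** 2 for f in resolved) / len(resolved)


# Hypothetical example log
log = [
    Forecast("Will I finish the write-up this month?", 0.7, outcome=False),
    Forecast("Will the meetup get >10 attendees?", 0.6, outcome=True),
]
print(round(brier_score(log), 3))  # 0.325
```

The hard part the comment points at is not this arithmetic but the habit: getting people to actually make and resolve such entries regularly.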
Also superforecasting and GJP are no longer new. Seems not at all surprising that most of the words written about them would be from when they were.
I’ve been around that long. Or more. I was lurking before I commented.
In my efforts to apply rationality I ended up in post-rationality. And ever upwards.