This approach allows you to share intuitions in a subject where you aren’t an expert but have read a few articles; it doesn’t allow you to share intuitions in a subject where you have actual expertise.
When it comes to seeking medical advice, a fellow rationalist has an easy time reading a handful of articles about the topic and forming an opinion based on those articles. If they have good source management, they can tell you about the articles. At the same time, they don’t have the expertise that a doctor has. The doctor’s intuition comes from having spent years in medical school, in their internship, and treating patients.
I work in medical research and know many healthcare practitioners. They often share anonymized stories about their patients and higher level summaries of patterns they see across their patient population or in their institution.
I couldn’t learn to be a doctor from these occasional stories, but I understand the intimate details of their work much better than I would from articles, especially the social side.
For example, my geneticist friend’s complaints about companies selling unregulated genetic tests helped me understand why doctors are so much more conservative than researchers when it comes to new and unregulated medical tech. Researchers see developing new tests as innovation; doctors often see it as injecting more noise and confusion into an already overwhelming system.
That was a crucial insight for me as a biomedical researcher thinking about how to make a clinical impact.
Sometimes your beliefs can’t be traced to a few specific sources, I agree. You just have this complex world model formed by years of study, and you’re not sure what specific info is leading to your intuition. And it’s not like you can mind meld an entire medical degree. But if your opinion is really based on a deep, complex, irreducible expertise, you definitely can’t convince someone with a logical argument either, because that also won’t transmit your deep expertise to them. At that point, there’s not much you can do but either try your best to mind meld, or just move on.
But I think most online debates don’t have this problem. There’s usually some specific topic (e.g. circadian rhythm disorders) and even an expert should be able to trace which parts of their education and experience are most relevant and share them (e.g. a specific study, a specific patient, or a common experience they’ve had several times with patients).
I have had this problem a lot with consulting clients. I’ll start a project and be 90% sure within an hour or two of what the shape of the overall result will be, but it takes hundreds of hours of work to collect and present the information in a way that will be (or should be) convincing to a third party. Partly it’s a matter of needing to articulate the source of intuitions. Partly it’s a matter of needing a few high-quality pieces of clear, unambiguous data, whereas the intuition is built on many individually ambiguous pieces of data.
But I think most online debates don’t have this problem.
To the extent that this is true, it’s often because the participants in most online debates don’t really know what they are talking about. It only becomes a problem for those people who do know what they are talking about and want to participate in online debates.
Right now I’m writing a book that’s partly about fascia, and I have plenty of intuitions about what’s true. Frequently, when I have an intuition about what’s true, I don’t refer to my personal experience but instead ask ChatGPT to do background research on the issue in question and then cite studies that document the facts I’m pointing toward, even though those studies aren’t the reason I formed my opinion in the first place. Frequently, engaging with the studies it brings up also makes me refine my position.
I had an online discussion with someone who mistakenly thought that LLMs need to be specifically taught to translate between languages and don’t pick up the ability to translate if you just feed them a corpus of text in both languages that does not include direct translations.
Explaining my intuition for why LLMs can do such translation is quite complex. It’s much easier to go to ChatGPT, ask for studies demonstrating that LLMs are able to pick up that ability to translate, and make my argument that way.
A good part of what science is about is people having some understanding of the world and needing to build a logical argument that backs up their insight about the world so that other people accept that insight.