Sometimes your beliefs can’t be traced to a few specific sources, I agree. You just have this complex world model formed by years of study, and you’re not sure what specific info is leading to your intuition. And it’s not like you can mind meld an entire medical degree. But if your opinion is really based on a deep, complex, irreducible expertise, you definitely can’t convince someone with a logical argument either, because that also won’t transmit your deep expertise to them. At that point, there’s not much you can do but either try your best to mind meld, or just move on.
But I think most online debates don’t have this problem. There’s usually some specific topic (e.g. circadian rhythm disorders) and even an expert should be able to trace which parts of their education and experience are most relevant and share them (e.g. a specific study, a specific patient, or a common experience they’ve had several times with patients).
I have had this problem a lot with consulting clients. I’ll start a project and be 90% sure within an hour or two of what the shape of the overall result will be, but it takes hundreds of hours of work to collect and present the information in a way that will be (or should be) convincing to a third party. Partly it’s a matter of needing to articulate the source of intuitions. Partly it’s a matter of needing a few high-quality pieces of clear, unambiguous data, whereas the intuition is built on many individually ambiguous pieces of data.
But I think most online debates don’t have this problem.
To the extent that this is true, it’s often because the participants in most online debates don’t really know what they are talking about. It only becomes a problem for the people who want to participate in online debates and do know what they are talking about.
I’m currently writing a book that’s partly about fascia, and I have plenty of intuitions about what’s true. Frequently, when I have such an intuition, I don’t refer to my personal experience; instead, I ask ChatGPT to do background research on the issue in question and then cite studies that document the facts I’m pointing toward, even though those studies aren’t the reason I formed my opinion in the first place. Frequently, engaging with the studies it brings up also makes me refine my position.
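To make that workflow concrete, here is a minimal sketch of the research step, assuming the OpenAI Python SDK as a stand-in for the ChatGPT interface; the model name and the fascia claim in the prompt are placeholders I made up for illustration, not taken from the book.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder claim for illustration; the actual claims from the book would go here.
claim = "Fascia can actively contract independently of skeletal muscle."

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {
            "role": "user",
            "content": (
                "I believe the following claim is true: " + claim + "\n"
                "List peer-reviewed studies that support or contradict it, "
                "with authors, year, and journal, and note any important caveats."
            ),
        }
    ],
)

print(response.choices[0].message.content)

# Whatever citations come back still need to be checked against the actual
# papers, since language models can return plausible-looking but wrong references.
```

The point of the last comment is the crux of the workflow: the model surfaces candidate studies quickly, but the studies themselves, once verified, are what carry the argument.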
I had an online discussion with someone who mistakenly thought that LLMs need to be specifically taught to translate between languages and don’t pick up the ability to translate if you just feed them a corpus of text in both languages that does not include direct translations.
Explaining my intuition for why LLMs can do such translation is quite complex. It’s much easier to go to ChatGPT, ask for studies demonstrating that LLMs are able to pick up the ability to translate, and make my argument that way.
A good part of what science is about is people having some understanding of the world and needing to build a logical argument that backs up their insight so that other people will accept it.