But I think most online debates don’t have this problem.
To the extent that this is true, it's often because the participants in most online debates don't really know what they are talking about. It only becomes a problem for people who do know what they are talking about and want to participate in online debates.
I'm currently writing a book that's partly about fascia, and I have plenty of intuitions about what's true. Frequently, when I have such an intuition, I don't appeal to my personal experience but instead ask ChatGPT to do background research on the issue and then cite studies that document the facts I'm pointing toward, even though those studies aren't the reason I formed my opinion in the first place. Engaging with the studies it brings up also frequently makes me refine my position.
I had an online discussion with someone who mistakenly thought that LLMs need to be specifically taught to translate between languages, and that they don't pick up the ability to translate if you just feed them a corpus of text in both languages that doesn't include direct translations.
Explaining my intuition for why LLMs can do such translation is quite complex. It's much easier to go to ChatGPT, ask for studies demonstrating that LLMs do pick up the ability to translate, and make my argument that way.
A good part of what science is about is people having some understanding of the world and needing to build a logical argument that backs up their insight, so that other people will accept it.