I, for one, have never in my life used the words “reality fluid.”
Well, now I have. :D
I quite obviously don’t think that they’re wrong.
You’ve got things on your list that are mutually exclusive (Jung and Freud being the most glaring example to me, but almost any science and “Chakras” would work too), so it’s pretty dang safe to say that a number of things on your list are wrong.
I think you partly mean different things by “wrong”. Two contradictory models can each make lots of reliably correct predictions or yield lots of worthwhile insights, even if one or both rest on false fundamental assumptions or ontological claims. It’s easy to treat supernatural ontological claims as falsifying a model, but they usually don’t invalidate, or have much effect on, its predictions, though they do hold back the expansion and integration of models.
I suspect you and Will have different definitions of “wrong”. It seems obvious that, even if two theories are mutually exclusive taken as wholes, each one could contain some unique useful observations and concepts (even if one or both theories make some dead-wrong assumptions or false claims of ontological specialness).
No, you mentioned them.
Pah, a trifle.