I’m referring especially to the parts of psychology that consisted of people just making up whatever sounded good.
I quite obviously don’t think that they’re wrong.
Your focus on ontology and meta-ontology is interesting, could you explain more how it’s related to friendliness?
It seems that a large part of what makes Steve Rayhawk so awesome is that he can make insightful connections between disparate fields by way of reasoning about them in terms of a larger, consistent framework. The same goes for e.g. Michael Vassar and Peter de Blanc. That said, it’s probable that their ontologies don’t carve reality at its joints in the way that would be most conducive to reasoning about Friendliness… and most rationalists I talk to just seem to lack a coherent ontology entirely, which makes it damn hard to propagate belief updates between domains, and hard to see potential patterns or hypotheses that suggest themselves. (Think of the state of what should have been known as evolutionary biology, before Darwin discovered it.) It seems like it’d be useful to better understand what went into how they constructed their ontologies (and meta-ontologies). It’s also confusing that ontology has become so tied up with algorithmic-probability-theoretic cosmology and whatnot. Meanwhile we’re still using words like ‘reality fluid’ while trusting our Occamian intuitions about which ontologies are elegant.
I, for one, have never in my life used the words “reality fluid.”
Well, now I have. :D
I quite obviously don’t think that they’re wrong.
You’ve got things on your list that are mutually exclusive (Jung and Freud being the most glaring example to me, but almost any science and “Chakras” would work too), so it’s pretty dang safe to say that a number of things on your list are wrong.
You’ve got things on your list that are mutually exclusive (Jung and Freud being the most glaring example to me, but almost any science and “Chakras” would work too), so it’s pretty dang safe to say that a number of things on your list are wrong.
I think you partly mean different things by “wrong”. Two contradictory models can each make lots of reliably correct predictions or find lots of worthwhile insights, even if one or both make false fundamental assumptions or ontological claims. (It’s easy to focus on supernatural ontological claims as falsifying a model, but they usually don’t invalidate, or have much effect on, its predictions (though they do hold back expansion and integration of models).)
You’ve got things on your list that are mutually exclusive (Jung and Freud being the most glaring example to me, but almost any science and “Chakras” would work too)
I suspect you and Will have different definitions of “wrong”. It seems obvious that, even if two theories are mutually exclusive taken as wholes, each one could contain some unique useful observations and concepts (even if one or both theories make some dead-wrong assumptions or false claims of ontological specialness).
It seems that a large part of what makes Steve Rayhawk so awesome is that he can make insightful connections between disparate fields by way of reasoning about them in terms of a larger, consistent framework.
Can you give some examples of this, or maybe even write a post on the topic? I’m still really fuzzy as to what you’re talking about.
No, you mentioned them.
Pah, a trifle.