You’re leaning heavily on transitivity. In areas such as maths, where identity can be defined precisely, transitivity holds: if a=b and b=c, then a=c. But it doesn’t have to hold where identity is fuzzy. If I define identity99 as “a is at least 99% similar to b”, then “a is identical99 to b” and “b is identical99 to c” don’t imply “a is identical99 to c”.
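A toy numerical sketch of that failure of transitivity (my own illustration, not part of the original exchange; the “ratio of smaller to larger” similarity measure and the specific numbers are just assumptions for the example):

```python
# Toy illustration (not from the original comment): treat "similarity" of two
# positive numbers as the ratio of the smaller to the larger, and "identical99"
# as "at least 99% similar". The relation holds pairwise but fails to chain.

def similarity(x, y):
    """Ratio of the smaller value to the larger one, in (0, 1]."""
    return min(x, y) / max(x, y)

def identical99(x, y):
    return similarity(x, y) >= 0.99

a, b, c = 1.000, 0.991, 0.982

print(identical99(a, b))  # True:  similarity ~0.991
print(identical99(b, c))  # True:  similarity ~0.991
print(identical99(a, c))  # False: similarity ~0.982, so transitivity fails
```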
But philosophy of mind seems to be different.[12] There’s no common term for applied philosophy of mind.
Not psychology?
If people (like Musk) are continually successful, you know they’re doing something right. One-off success can be survivorship bias, but the odds of having continued success by mere happenstance get very low, very quickly.
Unless success breeds more success, irrespective of other factors.
Politics is the mind killer
Is a special case of “tribalism is the mind killer”
I predict that these people can be accurately modeled as status maximizers
Whereas your ingroup must be something different, because ingroups and outgroups never have anything in common.
it should not be surprising that a website for bullying is full of bullies.
Ditto.
The whole point of dark matter is to hold galaxies together through gravity. And it is posited as having exotic properties apart from gravity.
Giving precise forecasts would give people who are invested in AI progress a chance to dunk on him and undermine his credibility by being able to point out precisely when and how he was wrong, while neglecting the gigantic consequences if they themselves are wrong about the consequences of continued AI capabilities research. Until the incentives he faces change, I expect his behavior to remain roughly the same.
But then anyone who makes a precise bet could lose out in the same way. I assume you don’t believe that betting in general is wrong, so where does the asymmetry come from? Is Yudkowsky excused from betting because he’s actually right?
There was a lot of other stuff in that debate.
Suppose they write an interesting critique of lesswrong and post it on lesswrong. Would you welcome that?
Your outgroup is not homogeneous, it just seems that way.
Using a noun is, by default, reification. Or, at the very least, it should be presumed so in the absence of some statement along the lines of “of course when I’m asking you to agree that people have qualia, I am not asking you to commit yourself to there being any such things as qualia”.
I’ve already said that I’m using “qualia” in an ontologically non-committal way.
I note from your 2016 comment that you use the word noncommittally yourself.
“Qualia are what happens in our brains (or our immaterial souls, or wherever we have experiences) in response to external stimulation, or similar things that arise in other ways (e.g., in dreams).”
Qualia without reification seem to me to amount to “people have experiences”.
As I have explained, equating qualia and experiences doesn’t sufficiently emphasise the subjective aspects.
“Experience” can be used in contexts like “experience a sunset”, where the thing experienced is entirely objective, or contexts like “experience existential despair”, where it’s a subjective feeling. Only the second kind of use overlaps with “qualia”. Hence, “qualia” is often briefly defined as “subjective experience”.
Note that “experience” is just as much of a noun as “quale”, so it has just as much of a reification issue.
None.
I am still trying to avoid needless reification,
Then don’t reify. The reification issue exists only in your imagination.
I understand that it doesn’t seem that way to you, but I don’t understand why; I don’t yet understand just what you mean by “qualia”,
How do you know it’s different from what you mean? You were comfortable using the word in 2016. This conversation started when I used a series of examples to define “qualia”, which you objected to as not being a real definition.
“It’s easy to give examples of things we think of as qualia. I’m not so sure that that means it’s easy to give a satisfactory definition of ‘qualia’.”
But when I asked you to define “matter”...you started off with a list of examples!
“First, purely handwavily and to give some informal idea of the boundaries, here are some things that I would call “matter” and some possibly-similar things that I would not. Matter: electrons, neutrons, bricks, stars, air, people, the London Philharmonic Orchestra (considered as a particular bunch of particular people). Not matter: photons, electric fields, empty space (to whatever extent such a thing exists), the London Philharmonic Orchestra (considered as a thing whose detailed composition changes over time), the god believed in by Christians (should he exist), minds. Doubtful: black holes; the gods believed in by the ancient Greeks (should they exist).”
The only thing I’m doing that is different is going for a minimal and common-sense approach, rather than a technical definition on the lines of “that which is ineffable, incorrigible, irreducible and repeatable”. Hence the list of examples: it’s hard to deny that one’s pains feel like something, even when one can quibble about incorrigibility or whatever.
and the one thing you’ve said that seems to be an attempt to explain why you want something that goes beyond “people have experiences” in the direction you’re calling “qualia”—the business about perception being a complex multi-stage process involving filtering and processing and whatnot—didn’t help me, for the reasons I’ve already given.
Again, that would be an ontology of qualia. Again, I am offering a definition, not a complete theory. Again, you shouldn’t be rejecting evidence because you don’t like its theoretical implications.
Splendid! I believe, like Alice, that people have experience. I am not a naïve realist.
Naive realism is not the denial of experience: it’s treating experience as objective.
At least, I think I am not a naïve realist. But—I’m sorry if this seems to be becoming a theme—I don’t really understand exactly what you mean by “naïve realist”
You can look up definitions, just as you can for “qualia”.
so maybe I’m wrong in thinking I’m not one. At any rate, I agree with what you said in apparent criticism of naïve realism, namely that when we perceive things it happens by means of a complicated process in which lots of things happen along the way.
Which have an objective aspect (things happen differently in the brains of different perceivers) and a subjective aspect (things seem different to different observers). Again, the subjective aspect is what’s relevant.
It occurs to me that maybe you’re taking “things” in “people experience things” to be e.g. sunsets, and “people experience things” to mean something like “a straightforward X-experiences-Y relation holds between people and sunsets”, which might explain why you are accusing me of naïve realism. That isn’t what I mean; the difficulty I am having here is that the language we have available for talking about this stuff is full of implicit reifications
No, it just seems to you that way.
they [sc. qualia] are supposed to be subjectively experienced ways of perceiving
Well, for sure I agree that perceiving happens, and that how that feels to us is subjective (because how anything feels to us is subjective, because that’s what “subjective” means).
No, it doesn’t mean anything so vacuous. If two people perform mental arithmetic, that is not subjective, because maths is objective: they get the same answer, or one of them is wrong. “Subjective” doesn’t just mean that individual apprehensions vary; it means there is no right or wrong about the variation. Some people like the way Marmite tastes to them, others don’t. Neither is right or wrong, but the Marmite is always the exact same substance.
Does that mean that I agree with what you mean when you say “we have qualia”, or not?
Well, you seem to be having trouble understanding what “subjective” means.
An optimizer is a very advanced meta-learning algorithm that can learn the rules of (effectively) any environment and perform well in it. It’s general by definition.
A square circle is square and circular by definition, but I still don’t believe in them. There has to be a trade-off between generality and efficiency.
It can keep modeling things at the level of atoms; or, it can dump the overwhelming majority of that information, collapse sufficiently large objects into point masses, and use Cowell’s method.
Once it has dumped the overwhelming majority of the information, it is no longer general. It can’t be both (fully) general and (fully) efficient.
If morality is a thing we have some reason to be interested in and care about, it’s going to have to be grounded in our preferences.
To some extent. Minimally, it can be grounded in our preference not to be punished. Less minimally, but not maximally, it can be grounded in negative preferences, like “I don’t want to be killed”, without being grounded in positive preferences, like “I prefer Tutti Frutti”. In either case, you don’t need a detailed picture of human preference to solve morality, if you haven’t first shown that all preferences are relevant.
What does “optimizer” mean? You’re implying that it has something to do with efficiency... but also something to do with generality...?
ALL people initially perceive morality as something objective, but as your preferences, so they may even wonder “does something become right simply because someone wants it?”
Was that supposed to read “as something objective, but ALSO as your preferences”.
That there are massive quantities of invisible matter in the universe that only interacts via gravitation? And happens to be spread around in about the same density distribution as all the regular matter?
Your second sentence is a pretty straightforward consequence of your first.
Local decoherence with global coherence is hardly many worlds. Global decoherence with local coherence would be a much better fit.
Macroscopic decoherence, a.k.a. many-worlds, was first proposed in a 1957 paper by Hugh Everett III
No, he didn’t call it “many worlds”, and he didn’t base it on decoherence.
Consciousness depends on the pattern, which, in turn, depends on physics
But obviously not in the sense I meant. If I meant it in that sense, I wouldn’t be disagreeing with you.
Many minds is a thing
https://en.wikipedia.org/wiki/Many-minds_interpretation
...But appealing to special properties of observers is one of the main things many-worlders are trying to get away from.
Induction, as the prediction of observations without necessarily having an explanation of the regularity, works just fine. The anti-induction argument is purely against induction as a source of hypotheses or explanations. Everyone has given up on that idea, and the pro-induction people don’t even use the word that way. There is a lot of talking past each other here.