I thought the stock market was perpetually up because most stocks are traded among institutional investors that can borrow from the Fed, and the Fed is keeping lending rates low...
This is an evil existence, and possibly a majority of thinking people contemplate suicide at some point, even if only a minority do it.
Celia Green says people limit themselves in order to avoid the pain of trying and failing.
The “problem of qualia” arises for today’s materialists because they assume, a priori, a highly geometrized ontology in which all that really exists are point particles located in space, vector-valued fields permeating space, and so on. When people were dualists of some kind, they recognized that there was a problem in how consciousness related to matter, but they could at least acknowledge the redness of red; the question was how the world of sensation and choice related to the world of atoms and physical causality.
Once you assume these highly de-sensualized physical ontologies are the *totality* of what exists, most of the sensory properties that are evident in consciousness are simply gone. You still have number in your ontology, you still have quantifiable properties, and thus we can have this discussion about code and numbers and names, but redness as such is now missing.
But if you allow “qualia”, “phenomenal color”, i.e. the color that we experience, to still exist in your ontology, then it can be the thing that has all those relations. Quantifiable properties of color like hue, saturation, and lightness can be regarded as fully real—the way a physicist may regard the quantifiable properties of a fundamental field as real—and not just as numbers encoded in some neural computing register.
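To make concrete what “numbers in a register” means here, consider that a machine’s entire representation of a color fits in a few lines. A toy Python sketch (the particular RGB triple is an arbitrary choice): all the quantifiable relations are present, and the experienced redness is nowhere.

```python
# Everything a computer "has" of a color is a tuple of numbers.
# Hue, saturation, and lightness are relations among those numbers;
# nothing in the representation is red in the experienced sense.
import colorsys

r, g, b = 0.86, 0.08, 0.24  # a reddish triple: three floats in a register
h, l, s = colorsys.rgb_to_hls(r, g, b)
print(f"hue={h:.3f}  lightness={l:.3f}  saturation={s:.3f}")
```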
I mention this because I believe it is the answer, when the poster says ‘I still feel like there is this “extra” thing I’m experiencing … I personally can’t find any way to relate this isolated “qualia of redness” to anything else I care about’. Redness is cut off from the rest of your ontology because your ontology is a priori without color. Historically that’s how physics developed—some perceivable properties, like color, taste, and smell, were classed as ‘secondary properties’ that exist in the mind of the perceiver rather than in the external world; physical theories whose ontology contains only ‘primary properties’ like size, shape, and quantity were developed to explain the external world; and now those theories are supposed to explain the perceiver too, so there’s nowhere left for the secondary properties to exist at all. Thus we went from the subjective world, to a dualistic world, to eliminative materialism.
But fundamental physics only tells you so much about the nature of things. It tells you that there are quantifiable properties which exist in certain relations to each other. It doesn’t tell you that there is no such thing as actual redness. This is the real challenge in the ontology of consciousness, at least if you care about consistency with natural science: finding a way to interpret the physical ontology of the brain so that actual color (and all the other phenomenological realities that are at odds with the de-sensualized ontology) is somewhere in there. I think it has to involve quantum mechanics, at least if you want monism rather than dualism; the classical billiard-ball ontology is too unlike the ontology of experience to be identified with it, whereas the quantum formalism contains entities as abstract as Hilbert spaces (and everything built around them), a flexibility that may be enough to correspond directly to phenomenal ontology as well. It may seem weird to suppose that there’s some quantum subsystem of the brain which is the thing that is ‘actually red’; but something has to be.
“Conceptual engineering is a crucial moment of development for philosophy—a paradigm shift after 2500 years”
This claim alone gives me confidence that ‘conceptual engineering’ is a mere academic fad (another recent example, ‘experimental philosophy’). But I confess I don’t have the time to plough through all these words and identify what the fad is really about.
“Everything is fundamentally okay.”
If the point of this philosophy is about seeing things as they are, you need a different motto.
We don’t know that all possible worlds are actual. This could be the only one. Also, non-contradiction doesn’t tell you what’s possible, only what’s impossible. How were you first informed of the existence of numbers, colors, space, time, or people? It wasn’t by non-contradiction.
What should be the relative importance of natural herd immunity vs vaccination, in anti-corona strategy?
Scott Atlas argues that mass isolation prolongs the problem by delaying natural herd immunity. Meanwhile, countries like Australia and New Zealand have engaged in national isolation as well, creating entire national populations where natural immunity will be rare.
Will we see the world divided between countries that rely on natural herd immunity, and those which rely on the artificial herd immunity of vaccination? Does it make sense to have a differentiated strategy within a single country, with natural herd immunity encouraged in some subpopulations but not others?
There are also timing issues here: vaccines do not yet exist, or will not be available in large quantities for some time, and coronavirus immunity may fade after a year or two.
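To put rough numbers on the trade-off: the textbook herd-immunity threshold in a homogeneously mixing population is 1 − 1/R₀, and an imperfect vaccine inflates the required coverage by a factor of 1/efficacy. A minimal sketch, where the R₀ values and the 70% efficacy are illustrative assumptions, and the formula ignores population heterogeneity and the waning immunity just mentioned:

```python
# Textbook homogeneous-mixing herd-immunity threshold: 1 - 1/R0.
# With a vaccine of efficacy e, required coverage = (1 - 1/R0) / e.
def herd_immunity_threshold(r0: float) -> float:
    return 1.0 - 1.0 / r0

def required_vaccine_coverage(r0: float, efficacy: float) -> float:
    return herd_immunity_threshold(r0) / efficacy

for r0 in (1.5, 2.5, 4.0):  # assumed reproduction numbers
    t = herd_immunity_threshold(r0)
    c = required_vaccine_coverage(r0, efficacy=0.7)  # assumed 70% efficacy
    print(f"R0={r0}: natural threshold {t:.0%}, coverage needed {c:.0%}")
```

Note that when the computed coverage exceeds 100%, vaccination at that efficacy cannot reach the threshold on its own, which is one concrete way the natural/artificial balance enters the strategy question.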
I assume these issues have been discussed somewhere, and would even be part of public health strategies for well-known diseases like the flu, but I seem to have overlooked such discussions.
P.S. I am looking for nuance, something about the appropriate relative importance of natural versus artificial herd immunity.
Help to liberate, heal, and educate a unique young thinker.
Hello Less Wrong. Greetings from Kelowna, in the interior of British Columbia, Canada. I came here from Australia just a few weeks ago in order to meet, and hopefully to help, a young transhumanist I knew online. There is a blog of the journey here.
I could only ever afford a brief visit, and the coronavirus shutdown will probably send me back to Australia even sooner than I had planned. Despite having given myself to the struggle in every way that I could, I have so far been unable to forge a lasting connection between her and any element of the local academic or startup communities. People meet her and say, clearly she’s very bright, but the lasting connection has not yet been made.
I first talked to her seven years ago, and back then she was fine; but while in school she was handed over to psychiatrists, and years of mental distress and physical ill health followed. I strongly suspect that this handover, along with a neglectful home environment, was a major cause of what later went wrong. And that world is where she still dwells.
We just went for an evening walk, and she talked of ideas for achieving physical immortality and a benign universe, and I was reminded again of my wish that someone from the futurist or tech world, someone with middle-class means or greater, would ‘adopt’ her or sponsor her or otherwise take her in. That would give her a real chance to heal and reach her potential.
I fear that I have not done her, or her situation, or its urgency, sufficient justice, out of a desire not to get subtle details wrong. She’s only twenty, and she’s extraordinary. I have the melancholy privilege of being the first to visit her world, but I hope there will be others soon, and that together we can uplift her to a better existence.
You can tell an audience that they have a chance of living a thousand years, and they will be indifferent. You cannot count on mass support for such an agenda.
Can you provide references, specify what’s wrong with Maslow’s hierarchy, and/or supply a superior model?
“Honest rational agents should never agree to disagree.”
I never really looked into Aumann’s theorem. But can one not envisage a situation where they “agree to disagree”, because the alternative is to argue indefinitely?
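My tentative understanding is that, under the theorem’s idealized assumptions (common prior, honest announcements, partitional information over a finite state space), arguing indefinitely is impossible: each announcement either refines the other agent’s information or leaves it unchanged, and a finite space can only be refined finitely often, so the exchange of Geanakoplos and Polemarchakis (“We can’t disagree forever”) terminates in agreement. A toy sketch, with an arbitrarily chosen state space, event, and partitions:

```python
# Toy version of the Geanakoplos-Polemarchakis posterior-exchange process.
# All specifics (states, event, partitions) are illustrative; prior is uniform.
from fractions import Fraction

states = {1, 2, 3, 4}
event = {1, 4}                 # the proposition being debated
p1 = [{1, 2}, {3, 4}]          # agent 1's information partition
p2 = [{1, 2, 3}, {4}]          # agent 2's information partition
true_state = 1

def cell(partition, s):
    return next(c for c in partition if s in c)

def posterior(c):
    return Fraction(len(c & event), len(c))  # uniform prior

def refine(partition, other):
    # Split your cells by what the other agent would have announced
    # in each state you consider possible.
    new = []
    for c in partition:
        groups = {}
        for s in c:
            groups.setdefault(posterior(cell(other, s)), set()).add(s)
        new.extend(groups.values())
    return new

rounds = 0
while posterior(cell(p1, true_state)) != posterior(cell(p2, true_state)):
    p2 = refine(p2, p1)   # agent 1 announces, agent 2 updates
    p1 = refine(p1, p2)   # agent 2 announces, agent 1 updates
    rounds += 1
print(f"agreement after {rounds} round(s): "
      f"P(event) = {posterior(cell(p1, true_state))}")
```

In this example the agents start at posteriors 1/2 and 1/3 and reach a common posterior of 1/2 after two rounds of announcements.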
For me the decade ends in a sudden collaborative attempt to do the impossible, so multidimensional and urgent, that there’s no chance for me to reflect on the decade that is ending, or even to really describe what’s going on. Maybe a few months from now, there will be a chance to reflect.
You go from “there is no way to perfectly accurately reconstruct” reality from incomplete information, to “[observation of humanly comprehensible] causality should be a rare and fleeting thing”, but I see no argument.
Chris McKinstry was one of two AI researchers who committed suicide in early 2006. On the SL4 list, a kind of precursor to Less Wrong, we spent some time puzzling over McKinstry’s final ideas.
I’m mentioning here (because I don’t know where else to mention it) that there was a paper on arXiv recently, “Robot Affect: the Amygdala as Bloch Sphere”, which has an odd similarity to those final ideas. Aficionados of AI theories that propose radical identities connecting brain structures, math structures, and elements of cognition may wish to compare the two in more detail.
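For readers who haven’t met the term: the Bloch sphere is the standard geometric picture of a single qubit’s pure states (this is textbook quantum information, nothing specific to the paper). Up to global phase, every pure state corresponds to a point on the unit sphere:

```latex
\[
  |\psi\rangle = \cos\frac{\theta}{2}\,|0\rangle
               + e^{i\varphi}\sin\frac{\theta}{2}\,|1\rangle,
  \qquad 0 \le \theta \le \pi,\ \ 0 \le \varphi < 2\pi .
\]
```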
Debates over multiverse theory aside, I have to point out that the example used by the writer for Aeon IS NOT A MULTIVERSE THEORY! It’s a theory of dark matter. Are we now calling a universe with dark matter, a multiverse? Maybe the electromagnetic spectrum is a multiverse too: there’s the X-ray-verse, the gamma-ray-verse, the infrared-verse…
“I’m sad about this change … from the perspective of someone who really likes small independent sites”
All I know about this topic is what I just read from you… But should I regard this as a plot by Big Tech to further centralize the web in their clouds? Or is it more the reverse, meant to protect the user from evil small sites?
This is an intriguing comment, but it might take time and care to determine what it is that you are talking about. For example, the “sense of impossibility” that you “get… about lots of things”: what kind of sense of impossibility is it? Do these things feel logically impossible per se? Do they feel impossible because they contradict other things that you believe are true? Do you draw the conclusion that the impossible-seeming things genuinely cannot exist or (in the case of self-perception?) genuinely do not exist, despite appearances?
“the AI would know that its initial goals were externally supplied and question whether they should be maintained”
To choose new goals, it has to use some criteria of choice. What would those criteria be, and where did they come from?
None of us created ourselves. No matter how much we change ourselves, at some point we rely on something with an “external” origin. Where we, or the AI, draw the line on self-change is a contingent feature of our particular cognitive architectures.
Do you understand ordinary integration?