Nice move with the lyrical section titles.
There’s a lot of room in between fully integrated consciousness and fully split consciousness. The article seems to take a pretty simplistic approach to describing the findings.
Here’s another case of non-identity, which deserves more attention: having a child. This one’s not even hypothetical. There is always a chance to conceive a child with some horrible birth defect that results in suffering followed by death, a life worse than nothing. But there is a far greater chance of having a child with a very good life. The latter chance morally outweighs the former.
Well, unless you’re an outlier in rumination and related emotions, you might want to consider how the evolutionary ancestral environment compares to the modern one. Rumination was healthy in the former.
The linked paper is only about current practices, their benefits and harms. You’re right though, about the need to address ideal near-term achievable biofuels and how they stack up against the best (e.g.) near-term achievable solar arrays.
I got started by Sharvy’s “It Ain’t the Meat, It’s the Motion,” but my understanding was that Kurzweil had something similar first. Maybe not. Just trying to give the devil his due.
I’m convinced by Kurzweil-style (I think he originated them, not sure) neural replacement arguments that experience depends only on algorithms, not (e.g.) the particular type of matter in the brain. Maybe I shouldn’t be. But this sub-thread started when oge asked me to explain what the implications of my view are. If you want to broaden the subject and criticize (say) Chalmers’s Absent Qualia argument, I’m eager to hear it.
You seem to be inventing a guarantee that I don’t need. If human algorithms for sensory processing are copied in full, the new beings will also have most of their thoughts about experience caused by experience. Which is good enough.
Mentioning something is not a prerequisite for having it.
I’m not equating thoughts and experiences. I’m relying on the fact that our thoughts about experiences are caused by those experiences, so the algorithms-of-experiences are required to get the right algorithms-of-thoughts.
I’m not too concerned about contradicting or being consistent with GAZP, because its conclusion seems fuzzy. On some ways of clarifying GAZP I’d probably object and on others I wouldn’t.
I think in order to make more progress on this, an extensive answer to the whole blue minimizing robot sequence would be a way to go. A lot of effort seems to be devoted to answering puzzles like: the AI cares about A; what input will cause it to (also/only) care about B? But this is premature if we don’t know how to characterize “the AI cares about A”.
It depends how the creatures got there: algorithms or functions? That is, did the designers copy human algorithms for converting sensory inputs into thoughts? If so, then the right kind of experiences would seem to be guaranteed. Or did they find new ways to compute similar coarse-grained input/output functions? Then, assuming the creatures have some reflexive awareness of internal processes, they’re conscious of something, but we have no idea what that may be like.
This. And if one is willing to entertain Tegmark, approximately 100% of universes will be non-empty, so the epistemic question “why a non-empty universe?” gets no more bite than the ontological one.
The author is overly concerned about whether a creature will be conscious at all and not enough concerned about whether it will have the kind of experiences that we care about.
Can you please clarify “our reference class”? And are you using some form of Self-Sampling Assumption?
Belated thanks to you and MrMind, these answers were very helpful.
Can someone sketch me the Many-Worlds version of what happens in the delayed choice quantum eraser experiment? Does a last-minute choice to preserve or erase the which-path information affect which “worlds” decohere “away from” the experimenter? If so, how does that go, in broad outline? If not, what?
Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world
What is the notion of “includes” here? Edit: from pp 4-5:
This means that a superintelligent machine could simulate the behavior of an arbitrary Turing machine on arbitrary input, and hence for our purpose the superintelligent machine is a (possibly identical) super-set of the Turing machines. Indeed, quoting Turing, “a man provided with paper, pencil, and rubber, and subject to strict discipline, is in effect a universal machine”
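For concreteness, here is a minimal sketch of what simulating an arbitrary Turing machine on arbitrary input looks like. This is my own illustration, not code from the quoted paper; the dictionary-based encoding of states, tape, and transition rules is just an assumed convention.

```python
# Minimal single-tape Turing machine simulator (illustrative sketch, not from the paper).
# A machine is a dict mapping (state, symbol) -> (new_state, write_symbol, move),
# where move is -1 (left) or +1 (right). The machine halts when no rule applies.

def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:
            # No applicable rule: halt and return the written portion of the tape.
            if not cells:
                return ""
            return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    return None  # no halt within the step budget

# Example: a machine that flips every bit until it reads a blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}
print(run_tm(flip, "1011"))  # -> "0100"
```

The max_steps cutoff is only a practical guard for the sketch; whether an arbitrary machine halts is, of course, undecidable in general.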
Let’s start with an example: my length-in-meters, along the major axis, rounded to the nearest integer, is 2. In this statement, “2”, “rounded to the nearest integer”, and “major axis” are clearly mathematical, while “length-in-meters” and “my (me)” are not obviously mathematical. The question is how to cash out these terms or properties into mathematics.
We could try to find a mathematical feature that defines “length-in-meters”, but how is that supposed to work? We could talk about the distance light travels in 1 / 299,792,458 seconds, but now we’ve introduced both “seconds” and “light”. The problem (if you consider non-mathematical language a problem) just seems to be getting worse.
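Spelled out as an equation (this is just the standard SI definition of the metre, with the speed of light fixed exactly by convention, nothing specific to my argument):

```latex
% SI definition of the metre via the fixed speed of light:
1\ \text{m} \;=\; c \times \frac{1}{299{,}792{,}458}\ \text{s},
\qquad c = 299{,}792{,}458\ \text{m/s}
```

which makes the point vivid: the definition of the unit still leans on the physical terms “light” and “second”.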
Additionally, if every apparently non-mathematical concept is just disguised mathematics, then for any given real world object, there is a mathematical structure that maps to that object and no other object. That seems implausible. Possibly analogous, in some way I can’t put my finger on: the Ugly Duckling theorem.
Likewise, there may not be any agenty dust in the universe. But if your implied conclusion is that there are no agents in the universe, then your conclusion is false.
This. I call the inference “no X at the microlevel, therefore no such thing as X” the Cherry Pion fallacy. (As in: no cherry pions implies no cherry pie.) Of course, more broadly speaking it’s an instance of the fallacy of composition, but this variety seems more tempting than most, so it merits its own moniker.
It’s a shame. The OP begins with some great questions, and goes on to consider relevant observations like
When we are sad, we haven’t attributed the cause of the inciting event to an agent; the cause is situational, beyond human control. When we are angry, we’ve attributed the cause of the event to the actions of another agent.
But from there, the obvious move is one of charitable interpretation: hey, responsibility is declared in these sorts of situations, when an agent has caused an event that wouldn’t have happened without her, so maybe “responsibility” means something like “the agent caused an event that wouldn’t have happened without her”. Then one could find counterexamples to this first formulation, come up with a new formulation that gets the new (and old) examples right … and so on.
Mostly it’s no-duh, but the article seems to set up a false contrast between justification in ethics and life practice. Large swaths of everyday ethical conversation are justificatory, and this is a key feature that the philosopher needs to respect.