Actually, superdeterminism models allow for both to be true; it is a different assumption that breaks.
The standard process is scope->effort->schedule. Estimate the scope of the feature or fix required (usually by defining requirements, writing test cases, listing impacted components, etc.) and correct for underestimating based on past experience. Then evaluate the effort required, again based on similar past efforts by the same team/person. Then and only then can you figure out the duty cycle for this project and estimate accordingly. Then double it, because even the best people suck at estimating. Then give the range as your answer if someone presses you on it: “This will be between 2 and 4 weeks, given these assumptions. I will provide updated estimates 1 week into the project.”
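A minimal sketch of that arithmetic in Python (the correction factor and duty cycle below are made-up placeholders; the real numbers come from your own team's history of estimates vs. actuals):

```python
# Sketch of the scope -> effort -> schedule pipeline described above.
# All numbers are illustrative assumptions, not recommendations.
def estimate_schedule(raw_effort_days, past_underestimate_factor=1.5,
                      duty_cycle=0.6):
    effort = raw_effort_days * past_underestimate_factor  # correct the raw estimate
    calendar = effort / duty_cycle  # only part of each day goes to this project
    return calendar, calendar * 2   # "then double it" gives the range

low, high = estimate_schedule(raw_effort_days=6)
print(f"This will take between {low:.0f} and {high:.0f} working days, "
      f"given these assumptions.")
```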
Not surprisingly, I have a few issues with your chain of reasoning.
1. I exist. (Cogito, ergo sum). I’m a thinking, conscious entity that experiences existence at this specific point in time in the multiverse.
Cogito is an observation. I am not arguing with that one. Ergo sum is an assumption, a model. The “multiverse” thing is a speculation.
Our understanding of physics is that there is no fundamental thing that we can reduce conscious experience down to. We’re all just quarks and leptons interacting.
This is very much simplified. Sure, we can do reduction, but that doesn’t mean we can do synthesis. There is no guarantee that synthesis is even possible. In fact, there are mathematical examples where synthesis might not be possible, simply because the relevant equations cannot be solved uniquely; I made a related point here. Here is an example: consciousness can potentially be reduced to atoms, but it may also be reduced to bits, a rather different substrate. Maybe other reductions are possible as well.
And it is also possible that constructing consciousness out of quarks and leptons is impossible because of “hard emergence”. Of the sorites kind. There is no atom of water. A handful of H2O molecules cannot be described as a solid, liquid or gas. A snowflake requires trillions of trillions of H2O molecules together. There is no “snowflakiness” in a single molecule. Just like there is no consciousness in an elementary particle. There is no evidence for panpsychism, and plenty against it.
“Getting out of bed in the morning” and “caring about one’s friends” turn out to be useful for more reasons than Jehovah—but their derivation in the mind of that person was entangled with Jehovah.
Cf: “Learning rationality” and “Hanging out with like-minded people” turn out to be useful for more reasons than AI risk—but their derivation in the mind of CFAR staff is entangled with AI risk.
That… doesn’t seem like a self-consistent decision theory at all. I wonder if any CDT proponents agree with your characterization.
causation might be in the map rather than the territory
Of course it is. There is no atom of causation anywhere. It’s a tool for embedded agents to construct useful models in an internally partially predictable universe.
“Backward causation” may or may not be a useful model at times, but it is certainly nothing but a model.
As a trained (though not practicing) physicist, I can see that you are making a large category error here. Relativity neither adds to nor subtracts from the causation models. In a deterministic Newtonian universe you can imagine backward causation as a useful tool. Sadly, its usefulness is rather limited. For example, the diffusion/heat equation is not well posed when run backwards: it blows up after a finite integration time. An intuitive way to see that is that you cannot reconstruct the shape of a glass of water from the puddle you see on the ground some time after it was spilled. But in cases where the relevant PDEs are well posed in both time directions, backward causality is equivalent to forward causality, if not computationally, then at least in principle.
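A minimal numerical sketch of the heat-equation point (my own toy code; the grid size, time step and initial bump are arbitrary): the same explicit finite-difference scheme that is stable integrating u_t = u_xx forward in time blows up after a few dozen steps when run in reverse.

```python
import numpy as np

n, dx = 100, 0.01
dt = 0.4 * dx**2                      # stable for the FORWARD problem (dt < dx^2/2)
x = np.arange(n) * dx
u = np.exp(-((x - 0.5) ** 2) / 0.01)  # a smooth initial bump

def step(u, sign):
    # One explicit finite-difference step of u_t = sign * u_xx (periodic BCs).
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)
    return u + sign * dt * lap / dx**2

for _ in range(500):                  # forward in time: the bump decays smoothly
    u = step(u, +1.0)
for k in range(500):                  # backward in time: round-off noise explodes
    u = step(u, -1.0)
    if np.max(np.abs(u)) > 1e6:
        print(f"blew up after {k + 1} backward steps")
        break
```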
All that special relativity gives you is that the absolute temporal order of two events is only defined when they are within each other’s lightcones, not outside them. General relativity gives you both less and more. On the one hand, the Hilbert action is formulated without referring to time evolution at all and poses no restriction on the type of matter sources, be they positive or negative density, subluminal or superluminal, finite or singular. On the other hand, to calculate most interesting things, one needs to solve the initial value problem, and that one poses various restrictions on what topologies and matter sources one can start with. On the third hand, there is a lot of freedom to define what constitutes “now”, as many different spacetime foliations are on equal footing.
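For reference, the Hilbert action mentioned above, in its standard form (units with c = 1; S_matter stands for whatever matter sources one adds). It is a single integral over all of spacetime, with no time-evolution structure singled out:

$$ S = \frac{1}{16\pi G}\int \sqrt{-g}\,R\,\mathrm{d}^4x + S_{\text{matter}} $$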
If you add quantum mechanics to the mix, the Born rule, needed to calculate anything useful regardless of one’s favorite interpretation, breaks linearity and unitarity at the moment of interaction (loosely speaking) and is not time-reversal invariant.
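For reference, the Born rule in its textbook form, for a normalized state $|\psi\rangle$ and measurement outcome $i$. The quadratic dependence on $\psi$ is what breaks linearity, and the accompanying projection onto $|i\rangle$ is not unitary:

$$ P(i) = |\langle i|\psi\rangle|^2 $$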
The entropic argument is also without merit: there is no reason to believe that entropy would decrease in a “high-entropy world”, whatever that might mean. We do not even know to what degree entropy is observer-independent (Jaynes argued that apparent entropy depends on the observer’s knowledge of the world).
Basically, you are confusing map and territory. If backward causality helps you make more accurate maps, go wild, just don’t claim that you are doing anything other than constructing models.
Omega will predict their action, and compare this to their actual action. If the two match...
For a perfect predictor the above simplifies to “lose 1 utility”, of course. Are you saying that your interpretation of EDT would fight the hypothetical and refuse to admit that perfect predictors can be imagined?
It seems almost tautologically true that you can’t accurately predict what an agent will do without actually running the agent. Because, any algorithm that accurately predicts an agent can itself be regarded as an instance of the same agent.
That seems manifestly false. You can figure out whether an algorithm halts or not without being accidentally stuck in an infinite loop. You can look at the recursive Fibonacci algorithm and figure out what it would do without ever running it. So there is a clear distinction between analyzing an algorithm and executing it. If anything, one would know more about the agent by using the techniques from analysis of algorithms than the agent would ever know about themselves.
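A toy illustration in Python of analyzing versus executing (my own sketch): from the Fibonacci recurrence one can derive Binet’s closed form on paper, without ever running the recursion, and then check it against actual execution.

```python
from math import sqrt

def fib_executed(n):
    # The naive recursive algorithm, actually run (exponential time).
    return n if n < 2 else fib_executed(n - 1) + fib_executed(n - 2)

def fib_analyzed(n):
    # The result of pencil-and-paper analysis of the same recurrence:
    # solving x^2 = x + 1 gives the golden ratio phi, and fib(n) is
    # the nearest integer to phi^n / sqrt(5) (Binet's formula).
    phi = (1 + sqrt(5)) / 2
    return round(phi ** n / sqrt(5))

for n in range(15):
    assert fib_executed(n) == fib_analyzed(n)
print("analysis agrees with execution for n < 15")
```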
30 seconds of googling gave me this link, which might not be anything exceptional but at least it offers a couple of relevant definitions:
what should I do, given that I don’t know what I should do?
what should I do when I don’t know what I should do?
and later a more focused question
what am I (or we) permitted to do, given that I (or we) don’t know what I (or we) are permitted to do
At least they define what they are working on...
What do we mean by “moral uncertainty”?
I was looking for a sentence like “We define moral uncertainty as …” and nothing came up. Did I miss something?
Suppose a universe is made up of 16 quantum particles each of which has two states: 0 and 1. In this sense, the entire universe is just a number like 0b0000000000000000.
Well, if your universe is just two states, its description in the eigenstate basis would be something like A1 exp(iE1 t)|1> + A2 exp(iE2 t)|2>, where A1 and A2 are complex and E1 and E2 are real (modulo normalization and phase). I am not sure how this maps onto a finite-length binary number.
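A minimal numerical sketch of that state (the energies and amplitudes below are arbitrary assumptions): even a single two-state system is a pair of continuously evolving complex amplitudes, not one binary digit.

```python
import numpy as np

E1, E2 = 1.0, 2.3             # eigenenergies (arbitrary units, hbar = 1)
A1 = A2 = 1 / np.sqrt(2)      # normalized amplitudes

for t in (0.0, 0.5, 1.0):
    psi = np.array([A1 * np.exp(1j * E1 * t),   # coefficient of |1>
                    A2 * np.exp(1j * E2 * t)])  # coefficient of |2>
    print(f"t = {t}: psi = {psi}")  # continuous complex numbers, not 0b... digits
```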
Sorry, my spam filter ate your reply notification :(
To “dissolve” the invented/discovered question about math: it’s a false dichotomy. Constructing mathematical models, consciously or subconsciously, is constructing the natural transformations between categories that allow a high “compression ratio” for models of the world. They are as much “out there” in the world as the compression allows. But they are not in some ideal Platonic world separate from the physical one. Not sure if this makes sense.
wouldn’t a presupposition of having abstraction as natural transformation presuppose the existence of abstraction to define itself?
There might be a circularity, but I do not see one. The chain of reasoning is, as above:
1. There is a somewhat predictable world out there
2. There are (surjective) maps from the world to its parts (models)
3. There are commonalities between such maps such that the procedure for constructing one map can be applied to another map.
4. These commonalities, which would correspond to natural transformations in the CT language (the standard definition is recalled right after this list), are a way to further compress the models.
5. To an embedded agent these commonalities feel like mathematical abstractions.
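For reference, the standard definition leaned on in step 4: a natural transformation $\eta : F \Rightarrow G$ between functors $F, G : \mathcal{C} \to \mathcal{D}$ assigns to every object $X$ a morphism $\eta_X : F(X) \to G(X)$ such that

$$ \eta_Y \circ F(f) = G(f) \circ \eta_X \quad \text{for every } f : X \to Y, $$

i.e. the same translation $\eta$ works uniformly across everything the maps $F$ and $G$ touch, which is the sense of “commonality between maps” used above.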
I do not believe I have used CT to define abstractions, only to meta-model them.
Right, that’s the question. Sure, it is easy to state that “the metric must be a faithful representation of the target”, but it never is, is it? From the point of view of double inversion, optimizing the target is a hard inverse problem because, as in your pizza example, the true “values” (pizza as a preference against the background of an otherwise balanced diet) are not easily observable. What would be a double inverse in this case? Maybe something like trying various amounts of pizza and getting feedback on enjoyment? That would match the long-division pattern. I’m not sure.
I am not sure how this leads to panpsychism. What are the logical steps there?
“Why do I think reality exists?”
Is already answerable: you can list a number of reasons why you hold this belief. You are not supposed to dissolve the new question, only to reformulate the original one so that it becomes answerable.
why ANY process “feels” anything at all
Is harder, because we do not have a good handle on what physical process creates feelings or, in Dennett’s approach, on how feelings form. But at least we know what kind of research needs to be conducted in order to make progress in that area. In that way the question is answerable, at least in principle; we are just lacking a good understanding of how the human brain works. So the question is ultimately about neuroscience and algorithms.
But the hard problem of consciousness is one of the unique exceptions because it deals with subjective experience, specifically why we have subjective experience at all. (It is, in fact, a variant of the first-cause problem.)
That’s the “dangling unit” (my grade 8 self says “lol!” at the term) Eliezer was talking about. There are no “unique exceptions”: we are algorithms, and some of the artifacts of running our algorithms are “feelings” or “qualia” or “subjective experiences”. If this leaves you saying “but… but… but...”, then the next quote from Eliezer already anticipates that:
This dangling unit feels like an unresolved question, even after every answerable query is answered. No matter how much anyone proves to you that no difference of anticipated experience depends on the question, you’re left wondering: “But does the falling tree really make a sound, or not?”
When in doubt, follow the Golden Rule.
I’ve read through the whole Quantum Physics Sequence once or twice, and whenever Eliezer talks about actual science, it is popularized, but not wrong. Some parts are explained really nicely, too. Unfortunately, those are the parts that are also irrelevant to learning rationality, the whole impetus for Eliezer writing the sequence. And the moment he goes into MWI apologia, for lack of a better word, it all goes off the rails, there is no more science, just persuasion. To be fair, he is not alone in that. Sean Carroll, an excellent physicist from whose lecture notes I had learned General Relativity, has published a whole book pushing the MWI onto the unsuspecting public.
One area where the Quantum Physics sequence is useful for rationality is exposing how weird and counter-intuitive the world is, and feeling humbled about one’s own stated and unstated wrong assumptions and conclusions, something we humans are really bad at. Points like “All electrons are the same, this one here and that one there” and “Actually, there are no electrons, just fields that sometimes look like electrons”.
Where the sequence fails utterly in my view is the pseudo-scientific discussions about “world thickness” and the fictional narratives about it.
And an even simpler summary in a follow-up post, Righting a Wrong Question:
When you are faced with an unanswerable question—a question to which it seems impossible to even imagine an answer—there is a simple trick which can turn the question solvable.
“Why do I have free will?”
“Why do I think I have free will?”
This reminds me of Eliezer’s classic post Dissolving the Question.
From your post:
The “hard problem of consciousness” is “why is there an experience of consciousness; why does information processing feel like anything at all?”
The “meta-problem of consciousness” is “why do people believe that there’s a hard problem of consciousness?”
From Eliezer’s post:
Your assignment is not to argue about whether people have free will, or not.
Your assignment is not to argue that free will is compatible with determinism, or not.
Your assignment is not to argue that the question is ill-posed, or that the concept is self-contradictory, or that it has no testable consequences.
You are not asked to invent an evolutionary explanation of how people who believed in free will would have reproduced; nor an account of how the concept of free will seems suspiciously congruent with bias X. Such are mere attempts to explain why people believe in “free will”, not explain how.
Your homework assignment is to write a stack trace of the internal algorithms of the human mind as they produce the intuitions that power the whole damn philosophical argument.
Is there anything else to the book you review beyond what Eliezer captured 12 years ago?
“Functions that don’t exhibit Goodhart effects under extreme optimization” might be a promising area to look into. What does it mean for a function to behave as expected under extreme optimization? Can you give a toy example?
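For what it’s worth, one hypothetical shape such a toy example might take (my own construction, not from the post): compare a proxy that tracks the true objective only in the ordinary regime with a proxy whose error stays bounded everywhere, and push both to their argmax.

```python
import numpy as np

x = np.linspace(0, 10, 1001)
true_value = -(x - 3) ** 2                          # what we actually care about
proxy_goodhart = x                                  # tracks value only in the ordinary regime
proxy_faithful = true_value + 0.1 * np.sin(5 * x)   # bounded error everywhere

for name, proxy in [("goodhart", proxy_goodhart), ("faithful", proxy_faithful)]:
    x_star = x[np.argmax(proxy)]                    # "extreme optimization" of the proxy
    print(name, "picks x =", round(x_star, 2),
          "true value =", round(-(x_star - 3) ** 2, 2))
```

Under extreme optimization the first proxy lands far from the true optimum while the second stays near it; “no Goodhart effect” would presumably mean the argmax of the proxy tracks the argmax of the true function.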