Not an expert, but they seem to be less useful in a conflict with a nuclear power that possesses something like this: https://en.wikipedia.org/wiki/Kh-47M2_Kinzhal
If you do travel back to the past though, you may find yourself travelling along a different timeline after that
No, that’s not how it works. That’s not how any of this works. If you are embedded in a CTC, there is no changing that. There is no escaping the Groundhog Day loop, or even realizing that you are stuck in one. You are not Bill Murray, you are an NPC.
And yes, our universe is definitely not a Gödel universe in any way. The Gödel universe is homogeneous, stationary, and rotating, while our universe is of the FRW-de Sitter type, as best we can tell.
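For concreteness, the standard textbook forms (nothing specific to this thread): the Gödel line element has a $dt\,dy$ cross term, which is what produces the closed timelike curves, while the FRW metric has none and instead carries an expanding scale factor:

$$ds^2_{\text{Gödel}} = \frac{1}{2\omega^2}\left[-(dt + e^{x}dy)^2 + dx^2 + \tfrac{1}{2}e^{2x}dy^2 + dz^2\right]$$

$$ds^2_{\text{FRW}} = -dt^2 + a(t)^2\left[\frac{dr^2}{1-kr^2} + r^2 d\Omega^2\right], \qquad a(t) \propto e^{Ht} \text{ at late (de Sitter) times}$$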
More generally, knowledge about the system, or memory, as well as the ability to act upon it to rearrange information. In fact, if an agent has perfect knowledge of a system, it can rearrange it in any way it desires.
Indeed, but then it would not be an embedded agent; it would be something from outside the Universe, at which point you might as well say “God/Simulator/AGI did it” and give up.
if we assume our universe is a causal loop, but it is not a CTC
That is incompatible with classical GR, as best I can glean. The philosophy paper is behind a paywall (boo!), and it’s by a philosopher, not a physicist, apparently, so it can be safely discounted (this attitude goes both ways, of course).
From that point on in your post, it looks like you are basically throwing **** against the wall and seeing what sticks, so I stopped trying to understand your logic.
Life doesn’t just veer off the rails into oblivion; it’s locked on a path, or lots of equivalent paths that are all destined to tell the same story — the same universal archetype. The loop cannot be broken, else it would have never existed. Life is bound to persist, bound to overcome, bound to exist again
To quote the classic movie, “Life, uh, finds a way”. Which is a nice and warm sentiment, but nothing more.
But, if your goal is a search for God, then 10/10 for rationalization.
When physicists were figuring out quantum mechanics, one of the major constraints was that it had to reproduce classical mechanics in all of the situations where we already knew that classical mechanics works well—i.e. most of the macroscopic world.
Well, that’s false. The details of the quantum-to-classical transition are very much an open problem. Something happens after the decoherence process removes the off-diagonal elements from the density matrix, and before only a single eigenvalue remains: the mysterious projection postulate. We have no idea at what scales it becomes important and in what way. The original goal was to explain new observations, definitely. But it was not “to reproduce classical mechanics in all of the situations where we already knew that classical mechanics works well”.
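To spell out the two steps with a toy two-state example (a standard illustration, my numbers): decoherence takes a pure state to a diagonal mixture, and then the projection postulate, by fiat, picks out a single outcome:

$$\begin{pmatrix} |a|^2 & ab^* \\ a^*b & |b|^2 \end{pmatrix} \xrightarrow{\ \text{decoherence}\ } \begin{pmatrix} |a|^2 & 0 \\ 0 & |b|^2 \end{pmatrix} \xrightarrow{\ \text{projection}\ } \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \ \text{with probability } |a|^2$$

The open problem is what, if anything, happens physically between the second and the third matrix.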
Your other example is more in line with what was actually going on:
for special and general relativity—they had to reproduce Galilean relativity and Newtonian gravity, respectively, in the parameter ranges where those were known to work
That program worked out really well. But that is not a universal case by any means. Sometimes new models don’t work in the old areas at all. Models of free will or consciousness do not reproduce physics, or vice versa.
The way I understand the “it all adds up to normality” maxim (not a law or a theorem by any means) is that new models do not make your old models obsolete where the old models worked well, nothing more.
I have trouble understanding what you would want from what you dubbed Egan’s theorem. In one of the comment replies you suggested that the same set of observations could be modeled by two different models, and that there should be a morphism between the two, either directly or through a third model that is more “accurate” or “powerful” in some sense than the other two. If I knew enough category theory, I would probably be able to express it in terms of some commuting diagrams, but alas. But maybe I misunderstand your intent.
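If I were to guess at the intent: given two models $M_1$ and $M_2$ of the same observations $O$, with prediction maps $p_1$ and $p_2$, the “theorem” would assert a mediating model $M_3$ and morphisms making the square commute (all names here are my guess, not anything from the post):

$$\begin{array}{ccc} M_3 & \xrightarrow{\ f_1\ } & M_1 \\ {\scriptstyle f_2}\downarrow & & \downarrow{\scriptstyle p_1} \\ M_2 & \xrightarrow[\ p_2\ ]{} & O \end{array} \qquad p_1 \circ f_1 = p_2 \circ f_2$$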
I was trying to understand the point of this, and it looks like it is summed up in
Which algorithm should an agent have to get the best expected value, summing across all possible environments weighted by their probability? The possible environments include those in which threats and promises have been made.
Isn’t it your basic Max EV that is at the core of all decision theories and game theories? The “acausal” part is using the intentional stance for modeling the parts of the universe that are not directly observable, right?
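In pseudo-Python, the question seems to reduce to something like this (all names are hypothetical placeholders, just to pin down the shape):

```python
# A minimal sketch of "pick the algorithm with the best expected value,
# summed across all possible environments weighted by their probability".

def expected_value(algorithm, environments):
    # environments: iterable of (probability, payoff) pairs, where
    # payoff(algorithm) is the value the algorithm obtains in that environment
    return sum(p * payoff(algorithm) for p, payoff in environments)

def best_algorithm(algorithms, environments):
    return max(algorithms, key=lambda a: expected_value(a, environments))
```

The environments “in which threats and promises have been made” are just more entries in the weighted sum.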
I think the most plausible explanation is that scientists don’t read the papers they cite
Indeed. Reading the abstract and skimming the intro/discussion is as far as it goes in most cases. Sometimes the title alone is enough to trigger a citation. Often it’s “reciting”: copying the references from someone else’s paper on the topic. My guess is that maybe 5% of references in a given paper have actually been read by the authors.
Is there a more standard terminology in psychology for this phenomenon? “Ugh field” feels LW-cultish.
So unless you are willing to commit that not only there is no reliable way to assign a prior, but also assigning a probability in this situation is invalid in itself
Indeed. If you have no way to assign a prior, probability is meaningless. And if you try, you end up with something as ridiculous as the Doomsday argument.
Note that speaking of probabilities only makes sense if you start with a probability distribution over outcomes.
In the firing squad setup, the a priori probability distribution is something like 99% dead vs 1% alive without a collusion to miss, and probably the opposite with the collusion to miss. So the Bayesian update gives you a high probability of collusion to miss. This matches the argument you presented here.
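With an illustrative 50/50 prior on collusion (my number, purely for the arithmetic):

$$P(\text{collusion}\mid\text{alive}) = \frac{0.99 \times 0.5}{0.99 \times 0.5 + 0.01 \times 0.5} = 0.99$$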
In the fine tuning argument we have no reliable way to create an a priori probability distribution. We don’t know enough physics to even guess reliably. Maybe it’s the uniform distribution of some “fundamental” constants. Maybe it’s normal or log-normal. Maybe it’s not even a distribution of the constants, but something completely different. Maybe it’s Knightian. Maybe it’s the intelligent designer/simulator. There is no hint from quantum mechanics, relativity, string theory, loop quantum gravity or any other source. There is only this one universe we observe, that’s it. Thus we cannot use Bayesian updating to make any useful conclusions, whether about fine tuning or anything else. Whether this matches your argument, I am not sure.
If you talk to a real vegan, their ethical argument will likely be “do not create animals in order to kill and eat them later”, period. Any discussion of the quality of life of the farm animal is rather secondary. This is your second argument, basically. The justification is not based on what the animals feel, or on their quality of life, but on what it means to be a moral human being, which is not a utilitarian approach at all. So, none of your utilitarian arguments are likely to have much effect on an ethical vegan. Note that rationalist utilitarian people here are not too far from that vegan, or at least that’s my conclusion from the comments to my post Wirehead Your Chickens.
a kind of group mind that is created when people consciously come together for a common purpose
(Not speaking for Eliezer, obviously.) “Carefully adjusting one’s model of the world based on new observations” seems like the core idea behind Bayesianism in all its incarnations, and I’m not sure if there is much more to it than that. The stronger the evidence, the more significant the update, yada-yada. It seems important to rational thinking because we all tend to fall into the trap of either ignoring evidence we don’t like or being overly gullible when something sounds impressive. Not that it helps a lot: way too many “rationalists” uncritically accept the local egregores and defend them like a religion. But allegiance to an ingroup is emotionally stronger than logic, so we sometimes confuse rationality with rationalization. Still, relative to many other ingroups this one is not bad, so maybe Bayesianism does its thing.
In this situation Goodhart is basically open-loop optimization. An EE analogy would be a high-gain op-amp with no feedback circuit. The result is predictable: you end up optimized out of the linear mode and into saturation.
You can’t explicitly optimize for something you don’t know. And you don’t know what you really want. You might think you do, but, as usual, beware what you wish for. I don’t know if an AI can form a reasonable terminal goal to optimize, but humans surely cannot. Given that some 90% of our brain/mind is not available to introspection, all we have to go by is the vague feeling of “this feels right” or “this is fishy but I cannot put my finger on why”. That’s why cautiously iterating with periodic feedback is so essential, and open-loop optimization is bound to get you to all the wrong places.
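A toy version of that failure mode (my own illustration, not from the post): optimize a noisy proxy hard enough and you mostly select for the noise, not the target.

```python
import random

# proxy = true target + measurement noise; hard selection on the proxy
# picks out candidates whose noise term is extreme, not whose target is
random.seed(0)
true_values = [random.gauss(0, 1) for _ in range(100_000)]
proxies = [v + random.gauss(0, 1) for v in true_values]

best = max(range(len(proxies)), key=proxies.__getitem__)
print(f"proxy score: {proxies[best]:.2f}")      # extreme outlier
print(f"true value:  {true_values[best]:.2f}")  # regressed about halfway to the mean
```

Closing the loop, i.e., re-measuring the target after each step, is what keeps the op-amp in its linear regime.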
It looks like you’ve got an anxiety flareup every time you try to work. Anxiety does not necessarily present as a fast heartbeat, hyperventilation or any other easily measurable symptom. I have seen it aplenty in myself and others. Often the issue is not enough slack; the way you describe it, you seem to have plenty, though maybe not of the mental kind.
One approach that I have seen help is to do “15 min work”. Not a pomodoro, though! Those imply lots of structured work and short breaks. Just… “I will write this code for 15 min” or “I will edit this post for 15 min”, no further obligations, no pressure. If you stop after 15 min, it’s still an accomplishment; if you decide to keep going for a while, that’s fine, too. But stop when you get the same feeling again, and do something more fun. Once the internal pressure goes away, think about when you can do another “15 min, no obligations past that” chunk of work.
It’s not quite the same, because if you’re confused and you notice you’re confused, you can ask.
You can if you do, but most people never notice, and those who notice some confusion are still blissfully ignorant of the rest of their self-contradicting beliefs. And by most people I mean you, me and everyone else. In fact, if someone pointed out a contradictory belief in something we hold dear, we would vehemently deny the contradiction and rationalize it to no end. And yet we consider ourselves to believe something. If anything, GPT-3’s beliefs are more belief-like than those of humans.
For this reason, I significantly prefer the Bohm interpretation over the many-worlds interpretation
Preferences do not make science. Philosophy, for sure.
Odds are, once mesoscopic quantum effects become accessible to experiment, we will find that none of the interpretational models reflect the observations well. I would put 10:1 odds that the energy difference of entangled states cannot exceed about one Planck mass, roughly twenty micrograms. Whether there is a collapse of some sort, hidden variables, superdeterminism, who knows.
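For reference:

$$m_P = \sqrt{\hbar c / G} \approx 2.18 \times 10^{-8}\ \text{kg} \approx 22\ \mu\text{g}$$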
Anyway, in general I find this approach peculiar, picking a model based on emotional reasoning like “I like indexicality” or “String theory is pretty”. It can certainly serve as a guide for which promising research areas to put one’s effort into, but it’s not a matter of preference; the observations will be the real arbiter.
Right, that makes sense. One reference class is “does not exist except in a fantasy” and the other “do not try it on yourself until there is reliable published research”.
Hmm, fairies and trolls are not at all like a “vitamin X”. There are plenty of supplements that are known to have a real positive effect in many cases. And we still know so little about the human body and mind that there could be plenty of low-hanging fruit waiting to be plucked. As for fairies and trolls, we know that these are artifacts of the human tendency to anthropomorphize everything, and there is not a single member of the reference class “not human but human-like in appearance and intelligence”. We also understand enough of evolution to exclude, with high confidence, species like that. (Including humanoid aliens, whether in appearance or in a way of thinking.) But we cannot convincingly state that some extract of an exotic plant or animal from the depths of the rainforest or the ocean would not turn out to have, say, a health benefit for humans. The odds are not good, but immeasurably better than those of finding another intelligence, on this planet or elsewhere.
Ah, okay. I don’t see any reason to be concerned about something that we have no effect on. Will try to explain below.
Regarding “subjunctive dependency” from the post linked in your other reply:
I agree with a version of “They are questions about what type of source code you should be running”, formulated as “what type of an algorithm results in max EV, as evaluated by the same algorithm?” This removes the contentious “should” part, which implies that you have an option of running some other algorithm (you don’t, you are your own algorithm).
The definition of “subjunctive dependency” in the post is something like “the predictor runs a simplified model of your actual algorithm that outputs the same result as your source code would, with high fidelity” and therefore the predictor’s decisions “depend” on your algorithm, i.e. you can be modeled as affecting the predictor’s actions “retroactively”.
Note that you, an algorithm, have no control over what that algorithm is, you just are it, even if your algorithm comes equipped with routines that “think” about themselves. If you also postulate that the predictor is an algorithm as well, then the question of decision theory in the presence of predictors becomes something like “what type of an agent algorithm results in max EV when immersed in a given predictor algorithm?” In that approach the subjunctive dependency is not a very useful abstraction, since the predictor algorithm is assumed to be fixed. In which case there is no reason to consider causally disconnected parts of the agent’s universe.
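As a toy instance of “agent algorithm immersed in a fixed predictor algorithm”, here is Newcomb’s problem with an illustrative predictor accuracy (the payoffs and the accuracy number are mine, not from the post):

```python
# Expected value of one-boxing vs two-boxing against a predictor
# that guesses the agent's algorithm with fixed accuracy.

def newcomb_ev(one_box: bool, accuracy: float = 0.99) -> float:
    big, small = 1_000_000, 1_000
    if one_box:
        return accuracy * big              # big box filled iff one-boxing was predicted
    return small + (1 - accuracy) * big    # big box filled only on a mispredict

print(newcomb_ev(one_box=True))   # 990000.0
print(newcomb_ev(one_box=False))  # 11000.0
```

With the predictor fixed, “choosing an algorithm” just means evaluating this table; no retrocausality required.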
Clearly your model is different from the above, since you seriously think about untestables and unaffectables.
I still don’t understand what you mean by “causally-disconnected” here. In physics, causally connected means anything in your past or future light cone (under some mild technical assumptions); causally disconnected is everything outside it. In that sense longtermism (regular or strong, or very strong, or extra-super-duper-strong) is definitely interested in the causally connected (to you now) parts of the Universe. A causally disconnected part would be caring now about something already beyond the cosmological horizon, which is different from something that will eventually go beyond the horizon. You can also be interested in modeling those causally disconnected parts, like what happens to someone falling into a black hole, because falling into a black hole might happen in the future, and so you are in effect interested in the causally connected parts.
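In the usual special-relativistic shorthand (GR adds the mild technical assumptions mentioned above): events are causally connected when the interval between them is timelike or null,

$$\Delta s^2 = -c^2 \Delta t^2 + |\Delta \mathbf{x}|^2 \le 0,$$

and causally disconnected when $\Delta s^2 > 0$, i.e. at spacelike separation.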