Yeah, the choice between interpretations of QM isn’t empirically meaningful (at least for the ones which don’t make weird observable-in-principle predictions). It’s still meaningful as part of a simplicity prior, the same way that e.g. rejecting a simulation hypothesis is meaningful.
And they were probably right about “action-at-a-distance” being impossible (i.e. locality), but it took General Relativity to get a functioning theory of gravity that satisfied locality.
(Incidentally, one of the main reasons I believe the many worlds interpretation is that you need something like that for quantum mechanics to satisfy locality.)
Wikipedia has some ideas for stellar engines, the simplest being essentially half a Dyson sphere.
I would guess that a lot (perhaps most) of the time, “salvage epistemology” is a rationalization given to rationalists to justify their interest in woo, as opposed to being the actual reason they are interested in the woo. (I still agree that the concept is likely hazardous to some people.)
No, because that point is for the case where he does want free speech, just that there are other factors that might interfere with that. This point covers the case where he doesn’t actually want free speech (i.e. free speech for me but not for thee).
That is indeed a point critics are making—though usually it’s more about the hypocrisy. I’ve seen this brought up recently in particular: https://www.theguardian.com/technology/2016/feb/03/elon-musk-blogger-tesla-motors-model-x
Seems more likely it’s just corruption.
[Epistemic status: very speculative]
One ray of hope that I’ve seen discussed is that we may be able to do some sort of acausal trade with even an unaligned AGI, such that it will spare us (e.g. it would give a humanity-aligned AGI control of a few stars, in exchange for us giving it control of several stars in the worlds we win).
I think Eliezer is right that this wouldn’t work.
But I think there are possible trades which don’t have this problem. Consider the scenario in which we Win, with an aligned AGI taking control of our future light-cone. Assuming the Grabby aliens hypothesis is true, we will eventually run into other civilizations, which will either have Won themselves, or are AGIs who ate their mother civilizations. I think Humanity will be very sad at the loss of the civilizations who didn’t make it because they failed at the alignment problem. We might even be willing to give up several star systems to an AGI who kept its mother civilization intact on a single star system. This trade wouldn’t have the issue Eliezer brought up, since it doesn’t require us to model such an AGI correctly in advance, only that that AGI was able to model Humanity well enough to know it would want this and would honor the implicit trade.
So symmetrically, we might hope that there are alien civilizations that both Win, and would value being able to meet alien civilizations strongly enough. In such a scenario, “dignity points” are especially aptly named: think of how much less embarrassing it would be to have gotten a little further at solving alignment when the aliens ask us why we failed so badly.
It seems relatively plausible that you could use a Limited AGI to build a nanotech system capable of uploading a diverse assortment of (non-brain, or maybe only very small brains) living tissue without damaging them, and that this system would learn how to upload tissue in a general way. Then you could use the system (not the AGI) to upload humans (tested on increasingly complex animals). It would be a relatively inefficient emulation, but it doesn’t seem obviously doomed to me.
Probably too late once hardware is available to do this though.
So in a “weird experiment”, the infrabayesian starts by believing only one branch exists, and then at some point starts believing in multiple branches?
If there aren’t other branches, then shouldn’t that be impossible? Not just in practice but in principle.
This article (by Eliezer Yudkowsky) explains why the suggestion in your 2nd paragraph won’t work: https://arbital.com/p/goodharts_curse/
I’m afraid I’ll butcher the argument in trying to summarize, but essentially it is because even slight misalignments will get blown up (i.e. the AI will pursue the areas where it is misaligned at the expense of everything else) under increasing optimization pressure. So you might have something aligned fairly well, and at the test optimization level you can check that it is indeed aligned pretty well. But then when you turn up the pressure, it will find weaker points in the specification and optimize for those instead. And this problem recurs at the meta-level, so there’s not an obvious way to say “well obviously just don’t do that” in a way that would actually work.
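Here’s a quick toy simulation (my own sketch, not from the linked article) of the basic mechanism: a proxy that equals the true value plus a small, well-behaved error looks fine under weak selection, but the harder you optimize the proxy, the more the selected option tends to be one where the error (the misalignment) happens to be large.

```python
# Toy illustration of Goodhart's curse / the optimizer's curse (my own sketch).
# V is the "true" value, U = V + error is the proxy we actually optimize.
# As the number of candidates (optimization pressure) grows, the candidate
# chosen by the proxy tends to overstate its true value by more and more.
import random

random.seed(0)

def candidate():
    v = random.gauss(0, 1)        # true value V
    u = v + random.gauss(0, 0.5)  # proxy U = V + misalignment error
    return v, u

for n in (10, 1_000, 100_000):    # increasing optimization pressure
    v_sel, u_sel = max((candidate() for _ in range(n)), key=lambda c: c[1])
    print(f"candidates={n:>7}  proxy={u_sel:.2f}  true={v_sel:.2f}  gap={u_sel - v_sel:.2f}")
```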
The problem with asking the AI how it will solve your problem is that if it is misaligned, it will just lie to you if that helps it complete its objective more effectively.
You can get some weird things if you are doing some weird experiment on yourself where you are becoming a Schrödinger cat and doing some weird stuff like that, you can get a situation where multiple copies of you exist. But if you’re not doing anything like that, you’re just one branch, one copy of everything.
Why does it matter that you are doing a weird experiment, versus the universe implicitly doing the experiment for you via decoherence? If someone else did the experiment on you without your knowledge, does infrabayesianism expect one copy or multiple copies?
Frequentist probability is objective because it is defined in terms of falsifiable real-world observations. An objective definition of probability can be used to resolve disagreements between scientists. A subjective definition cannot.
The definition in terms of limits is literally something that can never be observed. Limits can converge after arbitrarily long stretches of erratic behavior, so it can’t ever be falsified either. And trials are not necessarily independent. Assuming that in practice things will “be well behaved” is mathematically equivalent to choosing a prior, and thus inherits whatever issues you have with priors. So the objectivity and falsifiability of frequencies is an illusion.
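To make that concrete, here’s a small sketch (mine) of two deterministic 0/1 sequences that agree on their first million outcomes, so no experiment of that length can tell them apart, even though their limiting frequencies are 0 and 1: the limit is doing none of the observable work.

```python
# Sketch: limiting frequencies are unobservable and unfalsifiable from finite data.
# Both sequences alternate 0,1 for the first `prefix` trials (empirical frequency
# ~1/2), then become constant; their limiting frequencies are 0 and 1, but any
# record of the first million trials is identical.
from itertools import islice

def sequence(prefix, tail_value):
    n = 0
    while True:
        yield n % 2 if n < prefix else tail_value
        n += 1

def frequency(bits, upto):
    return sum(islice(bits, upto)) / upto

prefix = 10**6
print(frequency(sequence(prefix, 0), prefix))  # ~0.5, yet the limit is 0
print(frequency(sequence(prefix, 1), prefix))  # ~0.5, yet the limit is 1
```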
Because, if you endorse utilitarianism, then it generates a lot of confusion about the theory of rational agents, which makes you think there are more unsolved questions than there really are[2].
Are you alluding to agents with VNM utility functions here?
What would be 1⁄2 in base 2, or 1⁄5 in base 5? I think that p-adic numbers also make an exception for fractions that contain the base in the denominator.
Ah yeah, what I said was wrong. I was thinking of the completion $\mathbb{Q}_n$, which is a field (i.e. allows for division by all non-zero members, among other things) iff $n$ is prime. The problem with the 10-adics can be seen by checking that the two non-zero 10-adic numbers …890625 and …109376 multiply to …000000 = 0, so doing the completion has no hope of making it a field.
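Here’s a quick numerical check of that (my own sketch): build the 10-adic idempotent $e$ with $e \equiv 1 \pmod{2^k}$ and $e \equiv 0 \pmod{5^k}$; then $e$ ends in …890625, $1-e$ ends in …109376, and their product is divisible by $10^k$ for every $k$, i.e. it is 0 as a 10-adic number even though neither factor is.

```python
# Verify zero divisors in the 10-adic integers (my own sketch).
# e is the 10-adic number with e ≡ 1 (mod 2^k) and e ≡ 0 (mod 5^k) for all k;
# then e*(1-e) ≡ 0 (mod 10^k) for every k, so e and 1-e are non-zero 10-adic
# numbers whose product is zero.
def idempotent_mod(k: int) -> int:
    m2, m5 = 2**k, 5**k
    # Chinese remainder theorem: the unique residue ≡ 1 (mod 2^k), ≡ 0 (mod 5^k)
    return (m5 * pow(m5, -1, m2)) % (m2 * m5)   # pow(x, -1, m) needs Python 3.8+

for k in (6, 12, 24):
    e = idempotent_mod(k)
    assert e * ((1 - e) % 10**k) % 10**k == 0
    print(f"...{e:0{k}d} * ...{(1 - e) % 10**k:0{k}d} ends in {k} zeros")
```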
I’m a programmer, but my website uses $\mathbb{A}_\mathbb{Q}$ (the ring of rational adeles) as the favicon :D
Awesome exploration, you managed to hit a deep and interesting subject!
BTW, it’s perfectly legitimate to do 10-adic integers, or $n$-adic integers for any $n$ (see here). The reason primes are preferred is because of this issue you discovered:
But there is still a problem with fractions like 1⁄2 or 1⁄5. There is no integer, not even an infinite one, such that if we multiplied it by 2 or by 5, the last digit of the result would be 1.
That doesn’t happen if you use a prime number instead, so you get all "fractions" as "integers". [ETA: This is wrong, see comment below.] If you want things like $\frac{1}{p}$, you’ll need to do a limiting process (called the completion), just like you would to complete $\mathbb{Q}$ to get $\mathbb{R}$. The completion of the $p$-adic integers is called $\mathbb{Q}_p$, and you need to use a $p$-adic absolute value, called the $p$-adic valuation, to do the Cauchy completion.
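To illustrate the corrected version of that claim (my own sketch): a fraction $\frac{a}{b}$ shows up as a $p$-adic integer exactly when $p$ doesn’t divide $b$, because then $b$ is invertible modulo every power of $p$. So $\frac{1}{2}$ is a perfectly good 5-adic integer (it ends …2223 in base 5), but $\frac{1}{5}$ still isn’t, and for that you really do need $\mathbb{Q}_p$.

```python
# Compute the last k base-p digits of a/b viewed as a p-adic integer
# (only works when gcd(b, p) = 1). My own sketch, for illustration.
def p_adic_expansion(a: int, b: int, p: int, k: int) -> str:
    x = (a * pow(b, -1, p**k)) % p**k   # a/b modulo p^k
    digits = []
    for _ in range(k):
        digits.append(str(x % p))
        x //= p
    return "..." + "".join(reversed(digits))

print(p_adic_expansion(1, 2, 5, 8))                            # 1/2 in the 5-adics: ...22222223
print((2 * int(p_adic_expansion(1, 2, 5, 8)[3:], 5)) % 5**8)   # 1, so it really is 1/2
```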
The thing I find most interesting is the fact that if you start with plain old $\mathbb{Q}$, you can use any of these absolute values to complete it, and get $\mathbb{Q}_p$. The only other way you can complete $\mathbb{Q}$ like this is by using the standard absolute value to get $\mathbb{R}$. This is Ostrowski’s theorem. So there’s a completion for every prime number, plus an extra one for $\mathbb{R}$. Number theorists will sometimes talk as if this extra completion came from a mysterious new prime number, called the “prime at infinity”. This actually does work a lot like a prime number in lots of contexts, for example in the Galois theory of finite field extensions.
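One cute way to see the “prime at infinity” on an equal footing with the finite primes (again my own illustration, not something from the post): for every non-zero rational $x$, the ordinary absolute value and all the $p$-adic absolute values $|x|_p = p^{-v_p(x)}$ multiply together to exactly 1 (the product formula), so leaving out $\mathbb{R}$ would break the symmetry.

```python
# Product formula: |x| * (product over primes p of |x|_p) equals 1 for rational x != 0.
# My own sketch; |x|_p = p^(-v_p(x)), where v_p(x) is the exponent of p in x.
from fractions import Fraction

def v_p(n: int, p: int) -> int:
    """Exponent of the prime p in the non-zero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def abs_p(x: Fraction, p: int) -> Fraction:
    return Fraction(p) ** -(v_p(x.numerator, p) - v_p(x.denominator, p))

x = Fraction(-360, 7)
product = abs(x)                 # the "absolute value at infinity"
for p in (2, 3, 5, 7):           # the primes dividing 360 and 7
    product *= abs_p(x, p)
print(product)                   # 1
```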
Abstracting being a useful move isn’t in dispute here. The problem is that it’s a path of least resistance, which means that you’re liable to choose it without thinking, even when it isn’t the best move. Giving yourself the space to notice that choice allows you to make a better choice when there is one.
I’ve found that even when doing category theory, my lack of this skill has often made things much harder than they needed to be. For example, when I tried understanding the Yoneda lemma as something like “objects are equivalent to the morphisms into them”, it never quite clicked, and worse, I didn’t even notice that I was missing something important (the difference between the Yoneda lemma vs the Yoneda embedding). A clearer understanding only came when I tried writing an expository proof, and tried understanding the poset version, which were both steps in the concrete direction.
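For anyone curious, the poset version I found helpful can be stated in one line (my paraphrase of the standard fact):

```latex
% Poset version of the Yoneda lemma/embedding: an element of a poset is
% determined by the set of elements below it.
\[
  x \le y
  \;\iff\;
  \forall z .\ \bigl( z \le x \Rightarrow z \le y \bigr)
  \;\iff\;
  {\downarrow} x \subseteq {\downarrow} y .
\]
% The map $x \mapsto {\downarrow} x$ is the Yoneda embedding for posets;
% the general Yoneda lemma replaces ``$z \le x$'' with the hom-set
% $\mathrm{Hom}(z, x)$ and asks for naturality in $z$.
```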
Even if it were true, how would they know it was a propulsion technology?
I’m very sure it’s not this either. Alcubierre drives have several issues, such as requiring negative energy densities, having no known way to accelerate them, and requiring astronomical amounts of energy.
This video debunks some of the Pentagon’s UFO footage, and I have no reason to doubt that the other videos have similarly mundane explanations.