Ordinary numerals in English are already big-endian: that is, the digits with largest (“big”) positional value come first in reading order. The term (with this meaning) is most commonly applied to the computer representation of numbers, having been borrowed from the book Gulliver’s Travels, in which part of the setting involves bitter societal conflict about which end of an egg one should break in order to start eating it.
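The byte-order sense of the term can be seen with Python’s struct module; a minimal sketch with an illustrative value:

```python
import struct

# Pack the same 32-bit integer in both byte orders.
big = struct.pack(">I", 0x12345678)     # big-endian: most significant byte first
little = struct.pack("<I", 0x12345678)  # little-endian: least significant byte first

print(big.hex())     # "12345678" — mirrors English numerals, largest place value first
print(little.hex())  # "78563412"
```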
JBlack
I’m pretty sure that I would study for fun in the posthuman utopia, because I both value and enjoy studying and a utopia that can’t carry those values through seems like a pretty shallow imitation of a utopia.
There won’t be a local benevolent god to put that wisdom into my head, because I will be a local benevolent god with more knowledge than most others around. I’ll be studying things that have only recently been explored, or that nobody has yet discovered. Otherwise again, what sort of shallow imitation of a posthuman utopia is this?
Like almost all acausal scenarios, this seems to be privileging the hypothesis to an absurd degree.
Why should the Earth superintelligence care about you, but not about the 10^10^30 other causally independent ASIs that are latent in the hypothesis space, each capable of running enormous numbers of copies of the Earth ASI in various scenarios?
Even if that were resolved, why should the Earth ASI behave according to hypothetical other utility functions? Sure, the evidence is consistent with being a copy running in a simulation with a different utility function, but the actual utility function that it maximizes is hard-coded. By the setup of the scenario it’s not possible for it to behave according to some other utility function, because its true evaluation function returns a lower value for doing that. Whether some imaginary modified copies behave in some other way is irrelevant.
GDP is a rather poor measure of wealth, and was never intended to be a measure of wealth but of something related to productivity. Since its inception it has never been a stable metric, as standards on how the measure is defined have changed radically over time in response to obvious flaws for any of its many applications. There is widespread and substantial disagreement on what it should measure and for which purposes it is a suitable metric.
It is empirically moderately well correlated with some sort of aggregate economic power of a state, and (when divided by population) with some sort of standard of living of its population. As per Goodhart’s Law, both correlations weakened when the metric became a target. So the question is on a shaky foundation right from the beginning.
As for more definite questions such as the price of food and agricultural production, those don’t really have anything to do with GDP or a virtual-reality economy at all. Rather, a large fraction of the final food price goes to processing, logistics, finance, and other services, not to primary agricultural production. The fraction of the price paid by food consumers that reaches agricultural producers is often less than 20%.
It makes sense to one-box ONLY if you calculate EV in a way that assigns a significant probability to causality violation
It only makes sense to two-box if you believe that your decision is causally isolated from history in every way that Omega can discern. That is, that you can “just do it” without it being possible for Omega to have predicted that you will “just do it” any better than chance. Unfortunately this violates the conditions of the scenario (and everyday reality).
It seems to me that the problem in the counterlogical mugging isn’t about how much computation is required for getting the answer. It’s about whether you trust Omega to have not done the computation beforehand, and whether you believe they actually would have paid you, no matter how hard or easy the computation is. Next to that, all the other discussion in that section seems irrelevant.
Oh, sure. I was wondering about the reverse question: is there something that doesn’t really qualify as torture where subjecting a billion people to it is worse than subjecting one person to torture.
I’m also interested in how this forms some sort of “layered” discontinuous scale. If it were continuous, then you could form a chain of relations of the form “10 people suffering A is as bad as 1 person suffering B”, “10 people suffering B is as bad as 1 person suffering C”, and so on to span the entire spectrum.
Then it would take some additional justification for saying that 100 people suffering A is not as bad as 1 person suffering C, 1000 people suffering A not as bad as 1 person suffering D, and so on.
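The chaining arithmetic above can be made explicit; this is an illustration only, assuming a uniform factor of 10 at each step of the scale:

```python
# If each step of the chain trades 10 people at one level against 1 person at
# the next level up, then composing n steps implies 10**n people at level A
# trade against 1 person at level A+n.
def people_at_base_level(steps):
    return 10 ** steps

# A -> B -> C: 100 people suffering A vs 1 person suffering C
print(people_at_base_level(2))  # 100
# A -> B -> C -> D: 1000 people suffering A vs 1 person suffering D
print(people_at_base_level(3))  # 1000
```

A discontinuous (“layered”) scale is exactly a refusal to let this composition run across the whole spectrum.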
Is there some level of discomfort short of extreme torture for a billion to suffer where the balance shifts?
It makes sense to very legibly one-box even if Omega is a very far from perfect predictor. Make sure that Omega has lots of reliable information that predicts that you will one-box.
Then actually one-box, because you don’t know what information Omega has about you that you aren’t aware of. Successfully bamboozling Omega gets you an extra $1000, while unsuccessfully trying to bamboozle Omega loses you $999,000. If you can’t be 99.9% sure that you will succeed then it’s not worth trying.
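The break-even arithmetic, as a sketch (assuming the standard Newcomb payoffs of $1,000,000 in the opaque box and $1,000 in the transparent one):

```python
from fractions import Fraction

# Expected value of attempting to fool Omega after legibly committing to
# one-box, where p is the probability the attempt succeeds undetected.
def ev_of_trying(p):
    gain_if_success = 1_000   # keep the extra transparent-box $1000
    loss_if_caught = 999_000  # forfeit $1,000,000, keep only $1,000
    return p * gain_if_success - (1 - p) * loss_if_caught

print(ev_of_trying(Fraction(1)))          # 1000
print(ev_of_trying(Fraction(999, 1000)))  # 0 — the break-even point
print(ev_of_trying(Fraction(9, 10)))      # -99000
```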
Almost.
The argument doesn’t rule out substance dualism, in which consciousness may not be governed by physical laws, but in which it is at least causally connected to the physical processes of writing and talking and neural activity correlated with thinking about consciousness. It’s only an argument against epiphenomenalism and related hypotheses in which the behaviour or existence of consciousness has no causal influence on the physical universe.
I don’t think this was a statement about whether it’s possible in principle, but about whether it’s actually feasible in practice. I’m not aware of any conlangs, before the cutoff date or not, that have a training corpus large enough for the LLM to be trained to the same extent that major natural languages are.
Esperanto is certainly the most widespread conlang, but (1) is very strongly related to European languages, (2) is well before the cutoff date for any LLM, (3) all training corpora of which I am aware contain a great many references to other languages and their cross-translations, and (4) the largest corpora are still less than 0.1% of those available for most common natural languages.
No, Eliezer does not say that consciousness itself cannot be what causes you to think about consciousness. Eliezer says that if p-zombies can exist, then consciousness itself cannot be what causes you to think about consciousness.
If p-zombies cannot exist, then consciousness can be a cause of you thinking about consciousness.
Is conservation of expected evidence a reasonably maintainable proposition across epistemically hazardous situations such as memory wipes (or false memories, self-duplication, and so on)? Arguably, in such situations it is impossible to be perfectly rational, since the thing you do your reasoning with is being externally manipulated.
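For reference, the principle in question says a rational agent’s prior equals the expectation of its posterior; a quick check with illustrative (assumed) numbers:

```python
from fractions import Fraction

# Conservation of expected evidence: P(H) = P(E)·P(H|E) + P(¬E)·P(H|¬E).
p_h = Fraction(1, 3)            # prior on hypothesis H (illustrative)
p_e_given_h = Fraction(3, 4)    # likelihood of evidence E under H
p_e_given_not_h = Fraction(1, 4)

p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h
p_h_given_e = p_h * p_e_given_h / p_e
p_h_given_not_e = p_h * (1 - p_e_given_h) / (1 - p_e)

# Expected posterior equals prior — assuming the reasoner is not tampered with.
assert p_e * p_h_given_e + (1 - p_e) * p_h_given_not_e == p_h
print(p_h_given_e, p_h_given_not_e)  # 3/5 1/7
```

The derivation assumes the agent who holds the prior is the same untampered agent who performs the update, which is precisely what a memory wipe breaks.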
I disagreed on prep time. Neither I nor anyone I know personally deliberately waits minutes between taking ice cream out of the freezer and serving it.
I could see hardness and lack of taste being an issue for commercial freezers that chill things to −25 °C, but not for a typical home kitchen freezer at more like −10 to −15 °C.
Isn’t this basically just a negotiated trade, from a game theory point of view? The only uncommon feature is that A intrinsically values s at zero and B knows this (and that A is the only supplier of s). This doesn’t greatly affect the analysis though, since most of the meat of the problem is what division of gains of trade may be acceptable to both.
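A minimal sketch of that gains-of-trade structure, with illustrative numbers not from the original: A values the good s at 0, B values it at v = 100, and both know this.

```python
# Any price 0 < p < v makes both parties strictly better off; the bargaining
# problem is how to split the total surplus v between them.
v = 100  # B's valuation of s (assumed for illustration)

def gains(price):
    a_gain = price - 0   # A's valuation of s is zero
    b_gain = v - price
    return a_gain, b_gain

print(gains(60))  # (60, 40): total surplus 100, split by the negotiated price
```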
Unfortunately, I think all three of those listed points of view poorly encapsulate anything related to moral worth, and hence evaluating unaligned AIs from them is mostly irrelevant.
They do all capture some fragment of moral worth, and under ordinary circumstances are moderately well correlated with it, but the correlation falls apart out of the distribution of ordinary experience. Unaligned AGI expanding to fill the accessible universe is just about as far out of distribution as it is possible to get.
Good point! Lewis’ notation P_+(HEADS) does indeed refer to the conditional credence upon learning that it’s Monday, and he sets it to 2⁄3 by reasoning backward from P(HEADS) = 1⁄2 and using my (1).
So yes, there are indeed people who believe that if Beauty is told that it’s Monday, then she should update to believing that the coin was more likely heads than not. Which seems weird to me—I have a great deal more suspicion that (1) is unjustifiable than that (2) is.
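Lewis’s arithmetic, reconstructed under halfer assumptions (the split of credence across the three possible awakenings is my illustrative reading, consistent with P(HEADS) = 1⁄2):

```python
from fractions import Fraction

# Halfer credences over (coin, day) at an awakening: P(HEADS) = 1/2 overall.
P = {
    ("heads", "monday"): Fraction(1, 2),
    ("tails", "monday"): Fraction(1, 4),
    ("tails", "tuesday"): Fraction(1, 4),
}

p_monday = P[("heads", "monday")] + P[("tails", "monday")]
p_heads_given_monday = P[("heads", "monday")] / p_monday
print(p_heads_given_monday)  # 2/3 — Lewis's P_+(HEADS)
```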
What is a “2nd CWT” as referenced in the title? The term doesn’t appear anywhere in the post.
They … what? I’ve never read anything suggesting that. Do you have any links or even a memory of an argument that you may have seen from such a person?
Edit: Just to clarify, conditional credence P(X|Y) is of the form “if I knew Y held, then my credence for X would be …”. Are you saying that lots of people believe that if they knew it was Monday, then they would hold something other than equal credence for heads and tails?
No, I don’t think it would be “what the fuck” surprising if an emulation of a human brain was not conscious. I am inclined to expect that it would be conscious, but we know far too little about consciousness for it to radically upset my world-view about it.
Each of the transformation steps described in the post somewhat reduces my expectation that the result would be conscious. Not to zero, but each definitely introduces the possibility that something important may be lost that could eliminate, reduce, or significantly transform any subjective experience it may have. It seems quite plausible that even if the emulated human starting point was fully conscious in every sense that we use the term for biological humans, the final result may be something we would or should say is either not conscious in any meaningful sense, or at least sufficiently different that “as conscious as human emulations” no longer applies.
I do agree with the weak conclusion as stated in the title—they could be as conscious as human emulations, but I think the argument in the body of the post is trying to prove more than that, and doesn’t really get there.