I was going to say “bootstraps don’t work that way”, but since the validation happens on the future end, this might actually work.
Since Eliezer is a temporal reductionist, I think he might not mean “temporally continuous”, but rather “logical/causal continuity” or something similar.
Discrete time travel would also violate temporal continuity, by the way.
But where do we get Complexity(human)?
Note: since most global warming statistics are presented to the American layman in degrees Fahrenheit, it is probably useful to convert 0.7 K to 1.26 °F.
One might think eliminativism is metaphysically simpler, but reductionism doesn’t really posit more stuff; it just allows synonyms for various combinations of the same stuff.
I don’t think Occam’s razor is the main justification for eliminativism. Instead, consider the allegory of the wiggin: if a category is not natural, useful, or predictive, then in common English we say that the category “isn’t real”.
The Transcension hypothesis attempts to answer the Fermi paradox by saying that sufficiently advanced civilizations nearly invariably leave their original universe for one of their own making. By definition, a transcended civilization would have the power to create or manipulate new universes or self-enclosed pockets; this would likely require a very advanced understanding of physics. This understanding would probably be matched in other sciences.
This is my impression from a few minutes of searching. I do not know why you asked what it is when a simple search would have been faster. I do not expect that many people here are very knowledgeable about this particular hypothesis, and this is a basic question anyway.
The hypothesis does not seem very likely to me. It claims that transcendence is the inevitable evolutionary result of civilizations, but in nature we observe many niches. Civilizations are less like individuals in a species, and more like species themselves. And since a single civilization can colonize a galaxy, it would only take one civilization to produce a world unlike the one we see today—there would have to be not only no other niches, but no mutants either.
I don’t think Transcension is a term commonly used here. This question would probably be better answered by googling.
I think that people treat IQ as giving more information than it actually does. The main disadvantage is that you will over-adjust for any information you receive.
What does it mean to “revise Algorithm downward”? Observing dCapability/dCompute doesn’t seem to indicate much about the current value of Algorithm. Or is “Algorithm” shorthand for “the rate of increase of Algorithm”?
Back-of-the-envelope equilibrium estimate: if we increase the energy added to the atmosphere by 1%, then the Stefan-Boltzmann law says that a blackbody would need to be warmer by a factor of 1.01^0.25, or about 0.25%, to radiate that much more. At the Earth’s temperature of ~288 K, this would be ~0.7 K warmer.
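A minimal sketch of the same arithmetic in Python, assuming the ~288 K baseline and the 1% figure above:

```python
# Back-of-the-envelope: equilibrium warming needed to radiate 1% more power,
# treating the Earth as a blackbody (Stefan-Boltzmann: P ∝ T^4).
T_EARTH = 288.0        # assumed mean surface temperature, kelvin
POWER_INCREASE = 0.01  # 1% more power to radiate away

# To radiate (1 + x) times the power, temperature must rise by a factor of (1 + x)**0.25.
factor = (1 + POWER_INCREASE) ** 0.25
delta_T = T_EARTH * (factor - 1)

print(f"{delta_T:.2f} K warmer (~{delta_T * 9 / 5:.2f} °F)")  # ≈ 0.72 K (~1.29 °F)
```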
This suggests to me that it will have a smaller impact than global warming. Whatever we use to solve global warming will probably work on this problem as well. It’s still something to keep in mind, though.
I agree that #humans has decreasing marginal returns at these scales—I meant linear in the asymptotic sense. (This is important because large numbers of possible future humans depend on humanity surviving today; if the world was going to end in a year then (a) would be better than (b). In other words, the point of recovering is to have lots of utility in the future.)
I don’t think most people care about their genes surviving into the far future. (If your reasoning is evolutionary, then read this if you haven’t already.) I agree that many people care about the far future, though.
Epistemic status: elaborating on a topic by using math on it; making the implicit explicit
From a collective standpoint, the utility function over #humans looks like this: it starts at 0 when there are 0 humans, slowly rises until it reaches “recolonization potential”, then rapidly shoots up, eventually slowing down but still linear. However, from an individual standpoint, the utility function is just 0 for death, 1 for life. Because of the shape of the collective utility function, you want to “disentangle” deaths, but the individual doesn’t have the same incentive.
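A toy sketch in Python to make the two shapes concrete (the threshold and scale constants are purely illustrative assumptions, not numbers from the discussion):

```python
import math

RECOLONIZATION_POTENTIAL = 10_000  # hypothetical population needed to rebuild civilization
SURVIVAL_VALUE = 1e9               # illustrative weight on "civilization recovers at all"

def collective_utility(n_humans: float) -> float:
    # Near zero for tiny populations, shoots up around the recolonization
    # threshold, then keeps growing roughly linearly in the number of lives.
    survival_term = SURVIVAL_VALUE / (1 + math.exp(-(n_humans - RECOLONIZATION_POTENTIAL) / 1_000))
    return survival_term + n_humans

def individual_utility(alive: bool) -> float:
    # From the individual's standpoint: 0 for death, 1 for life.
    return 1.0 if alive else 0.0
```

The jump around RECOLONIZATION_POTENTIAL is why correlated deaths (which can push the population below it) look much worse collectively than the same number of scattered deaths, even though each individual’s term is unchanged.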
Useful work consumes negentropy. A closed system can only do so much useful work. (However, reversible computations may not require work.)
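One standard way to quantify both claims (textbook results, not anything specific to this thread): for a system exchanging heat with a bath at temperature $T$, the extractable work is bounded by the drop in free energy, and irreversibly erasing a bit has Landauer’s minimum cost:

$$W_{\text{extracted}} \le -\Delta F = -\Delta(U - TS), \qquad E_{\text{erase one bit}} \ge k_B T \ln 2.$$

Logically reversible operations erase no bits, which is why they can in principle avoid the $k_B T \ln 2$ floor.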
What do you mean by infinite IQ? If I take you literally, that’s impossible because the test outputs real numbers. But maybe you mean “unbounded optimization power as time goes to infinity” or something similar.
I’m not sure how magically plausible this is, but Dumbledore could have simplified the chicken brain dramatically. (See the recent SSC posts for how the number of neurons of an animal correlates with our sense of its moral worth.) Given that the chicken doesn’t need to eat, reproduce, or anything else besides stand and squawk, this seems physically possible. It would be ridiculously difficult without magic, but wizards regularly shrink their brains down to animal size, so apparently magic is an expert neuroscientist. If this was done, the chicken would have almost no moral worth, so it would be permissible to create and torture it.
Another vaguely disconcerting, almost self-aware comment by the bot. It can, in fact, write impressively realistic comments in 10 seconds.
I think “typical X does Y” is shorthand for “many or most Xs do Y”.
That last parenthetical remark is funny when you consider that GPT-2 knows nothing new and just reshuffles the “interesting and surprising amount of writing by smart people”.
Ah. It’s a bot. I suppose the name should have tipped me off. At least I get Being More Confused By Fiction Than Reality points.
How did you write that in less than a minute?