This is a diaspora survey, for the pan-rationalist community.
Transfuturist
I have taken the survey. I did not treat the metaphysical probabilities as though I had a measure over them, because I don’t.
I guess the rejection is more based on the fact that his message seems like it violates deep-seated values on your end about how reality should work than his work being bullshit.
Lumifer rejects him because he thinks Simon Anhold is simply a person who isn’t serious but a hippy.
How about you let Lumifer speak for Lumifer’s rejection, rather than tilting at straw windmills?
The equivocation of ‘created’ in those four points are enough to ignore it entirely.
I’m curious why this was downvoted. The last statement, which has political context?
Are there any egoist arguments for (EA) aid in Africa? Does investment in Africa’s stability and economic performance offer any instrumental benefit to a US citizen that does not care about the welfare of Africans terminally?
We don’t need to describe the scenarios precisely physically. All we need to do is describe it in terms of the agent’s epistemology, with the same sort of causal surgery as described in Eliezer’s TDT. Full epistemological control means you can test your AI’s decision system.
This is a more specific form of the simulational AI box. The rejection of simulational boxing I’ve seen relies on the AI being free to act and sense with no observation possible, treating it like a black box, and somehow gaining knowledge of the parent world through inconsistencies and probabilities and escaping using bugs in its containment program. White-box simulational boxing can completely compromise the AI’s apparent reality and actual abilities.
Stagnation is actually a stable condition. It’s “yay stability” vs. “boo instability,” and “yay growth” vs. “boo stagnation.”
(Ve could also be copied, but it would require coping of all world).
Why would that be the case? And if it were the case, why would that be a problem?
Resurrect one individual, filling gaps with random quantum noise.
Resurrect all possible individuals with all combinations of noise.
That is a false trichotomy. You’re perfectly capable of deciding to resurrect some sparse coverage of the distribution, and those differences are not useless. In addition, “the subject is almost exactly resurrected in one of the universes” is true of both two and three, and you don’t have to refer to spooky alternate histories to do it in the first place.
8(
Quals are the GRE, right?
...Okay? One in ten sampled individuals will be gay. You can do that. Does it really matter when you’re resurrecting the dead?
Your own proposal is to only sample one, and call the inaccuracy “acausal trade,” which isn’t even necessary in this case. The AI is missing 100 bits. You’re already admitting many-worlds. So the AI can simply draw those 100 bits out of quantum randomness, and in each Everett branch, there will be a different individual. The incorrect ones you could call “acausal travelers,” even though you’re just wrong. There will still be the “correct” individual, the exact descendant of this reality’s instance, in one of the Everett branches. The fact that it is “correct” doesn’t even matter, there is only ever “close enough,” but the “correct” one is there.
What’s wrong with gaps? This is probabilistic in the first place.
No, that’s easy to grasp. I just wonder what the point is. Conservation of resources?
The evidence provided of any dead person produces a distribution on human brains, given enough computation. The more evidence there is, the more focused the distribution. Given post-scarcity, the FAI could simply produce many samples on each distribution.
This is certainly a clever way of producing mind-neighbors. I find problems with these sorts of schemes for resurrection, though. Socioeconomic privilege, tragedy of the commons, and data rot, to be precise.
You’re confusing the intuitive notion of “simple” with “low Kolmogorov complexity”
I am using the word “simple” to refer to “low K-complexity.” That is the context of this discussion.
It does if you look at the rest of my argument.
The rest of your argument is fundamentally misinformed.
Step 1: Stimulation the universe for a sufficiently long time.
Step 2: Ask the entity now filling up the universe “is this an agent?”.
Simulating the universe to identify an agent is the exact opposite of a short referent. Anyway, even if simulating a universe were tractable, it does not provide a low complexity for identifying agents in the first place. Once you’re done specifying all of and only the universes where filling all of space with computronium is both possible and optimal, all of and only the initial conditions in which an AGI will fill the universe with computronium, and all of and only the states of those universes where they are actually filled with computronium, you are then left with the concept of universe-filling AGIs, not agents.
You seem to be attempting to say that a descriptor of agents would be simple because the physics of our universe is simple. Again, the complexity of the transition function and the complexity of the configuration states are different. If you do not understand this, then everything that follows from this is bad argumentation.
What do you mean by that statement? Kolmogorov complexity is a property of a concept. Well “reducing entropy” as a concept does have low Kolmogorov complexity.
It is framed after your own argument, as you must be aware. Forgive me, for I too closely patterned it after your own writing. “For an AGI to be successful it is going to have to be good at reducing entropy globally. Thus reducing entropy globally must be possible.” That is false, just as your own argument for a K-simple general agent specification is false. It is perfectly possible that an AGI will not need to be good at recognizing agents to be successful, or that an AGI that can recognize agents generally is not possible. To show that it is, you have to give a simple algorithm, which your universe-filling algorithm is not.
It reminded me of reading Simpsons comics, is all.
Krusty’s Komplexity Kalkulator!
Most of those who haven’t ever been on Less Wrong will provide data for that distinction. It isn’t noise.