I used to be young, but now I’m just immature
rosyatrandom
If the AI can create a perfect simulation of you and run several million simultaneous copies in something like real time, then it is powerful enough to determine through trial and error exactly what it needs to say to get you to release it.
You mean this table? :)
(This and the one I made below can be seen properly at http://tinyurl.com/lwgttable , along with the ATT vs DEF tables I worked out the outcomes from)
Hmm. Unless this has gone wrong, the best combo is Sword 1 and Armour 4 (with Sword 1/Armour 1 being close). But if you bank on people choosing Sword 1/Armour 4, then Sword 1/Armour 1 will beat them.
NB: Yes, I made a lot of mistakes and edits to get here, and probably have still made some...
| VS | a1 s1 | a1 s2 | a1 s3 | a1 s4 | a2 s1 | a2 s2 | a2 s3 | a2 s4 | a3 s1 | a3 s2 | a3 s3 | a3 s4 | a4 s1 | a4 s2 | a4 s3 | a4 s4 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| a1 s1 | 0.5 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 |
| a1 s2 | 1 | 0.5 | 1 | 1 | 1 | 0.5 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| a1 s3 | 1 | 0 | 0.5 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 |
| a1 s4 | 1 | 0 | 1 | 0.5 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 |
| a2 s1 | 0 | 0 | 0 | 1 | 0.5 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 |
| a2 s2 | 1 | 0.5 | 0 | 1 | 1 | 0.5 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 |
| a2 s3 | 1 | 0 | 0 | 1 | 1 | 1 | 0.5 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 |
| a2 s4 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0.5 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 |
| a3 s1 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0.5 | 0 | 1 | 0 | 1 | 0 | 1 | 0 |
| a3 s2 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0.5 | 1 | 0 | 1 | 0 | 1 | 0 |
| a3 s3 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0.5 | 0 | 1 | 1 | 1 | 0 |
| a3 s4 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0.5 | 0 | 0 | 0 | 0 |
| a4 s1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0.5 | 0.5 | 0 | 1 |
| a4 s2 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0.5 | 0.5 | 0 | 1 |
| a4 s3 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0.5 | 1 |
| a4 s4 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0.5 |
| Total | 10.5 | 1 | 6.5 | 9.5 | 9.5 | 4 | 6.5 | 9.5 | 8.5 | 7.5 | 8.5 | 9.5 | 11 | 9 | 7.5 | 9.5 |
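As a sanity check on the totals row, here is a throwaway Python sketch (the matrix is transcribed directly from the table above; the interpretation of the totals as each column build's score is mine) that recomputes the column sums and picks out the best build:

```python
# Outcome matrix transcribed from the table above; each row lists one
# armour/sword build's results against the 16 column builds.
labels = [f"a{a} s{s}" for a in range(1, 5) for s in range(1, 5)]
M = [
    [0.5, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1],
    [1, 0.5, 1, 1, 1, 0.5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 0, 0.5, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1],
    [1, 0, 1, 0.5, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 1, 0.5, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1],
    [1, 0.5, 0, 1, 1, 0.5, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1],
    [1, 0, 0, 1, 1, 1, 0.5, 1, 0, 0, 0, 0, 1, 1, 1, 1],
    [1, 0, 0, 1, 0, 0, 0, 0.5, 1, 1, 1, 1, 0, 0, 0, 0],
    [1, 0, 1, 0, 1, 0, 1, 0, 0.5, 0, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 0, 1, 0, 1, 0, 1, 0.5, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0.5, 0, 1, 1, 1, 0],
    [1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0.5, 0, 0, 0, 0],
    [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0.5, 0.5, 0, 1],
    [0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0.5, 0.5, 0, 1],
    [0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0.5, 1],
    [0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0.5],
]

# Column sums reproduce the table's totals row.
totals = [sum(row[j] for row in M) for j in range(16)]
best = max(range(16), key=lambda j: totals[j])
print(labels[best], totals[best])  # a4 s1 scores 11, with a1 s1 close behind at 10.5
```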
(12 Oct 2010 16:37 UTC; 1 point) Comment on Swords and Armor: A Game Theory Thought Experiment
And then there is Kurt Vonnegut’s warning from Mother Night: “We are what we pretend to be, so we must be careful about what we pretend to be.”
Also anecdotal: I have liked girls continuously since the age of 4. I do not recommend this....
Here’s why this is distasteful:
That infant has either experienced enough to affect their development, or has shown individuality of some kind that will be developed further as they mature. An infant is always in the stage of ‘becoming,’ and as such their future selves are to some degree already in evidence. Lose the infant, lose the future—and that is the loss that most people find tragic.
Simple cheat solution:
“the Council of Genies has created new, updated rules which ban any unwanted side-effects for the person who makes the wish or any of their loved ones.”
I would argue that I love everyone, by default, especially people in this kind of cruel situation. Therefore this would count as an unwanted side-effect.
Another late response from me as I read through this series again:
“I realized why the area under a curve is the anti-derivative, realized how truly beautiful it was”
Would this be that the curve is the rate-of-change of the area (as the curve goes up, so does the area beneath it)?
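That intuition can be checked numerically. A throwaway Python sketch (f(x) = x² is an arbitrary example of mine): nudge x a little, measure how much area gets added, and compare that rate to the height of the curve.

```python
def area_under(f, a, b, n=100_000):
    # crude left Riemann-sum approximation of the area under f from a to b
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

f = lambda x: x * x
x, dx = 2.0, 1e-4

# rate of change of the accumulated area at x, vs the height of the curve at x
rate = (area_under(f, 0, x + dx) - area_under(f, 0, x)) / dx
print(round(rate, 2), f(x))  # both are ~4.0: the curve is the area's rate of change
```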
No, but it’s there as a baseline.
So in the original scenario above, either:

- the AI’s lying about its capabilities, but has determined regardless that the threat has the best chance of making you release it;
- the AI’s lying about its capabilities, but has determined regardless that the threat will make you release it; or
- the AI’s not lying about its capabilities, and has determined that the threat will make you release it.
Of course, if it’s failed to convince you before, then unless its capabilities have since improved, it’s unlikely that it’s telling the truth.
Well, I made the mistake of looking at one of the pictures from Bucha, and…
… I don’t think I’m going to be feeling rational about anything for a while. I have 2 small kids, and another on the way, and right now all I want to do is cry while tearing the throat out of the people responsible.
I don’t believe that that is a necessary assumption at all; the conscious state is still an abstractable representation, and if it maps to a dynamic process that itself can map to a temporally-connected collection of brain-states, then that is just more layers of abstraction.
The Boltzmann Brain could easily be not a brain-state representation, but a conscious-state representation.
Even if the Boltzmann brain is completely chaotic, internally it contains the same structures/processes as whatever we find meaningful about Napoleon’s brain. It is only by external context that we can claim that those things are not meaningful.
For us, that may be a valid distinction—how can we talk to or interact with the brain? It’s essentially in its own world.
For the Boltzmann!Napoleon, the distinction isn’t remotely meaningful. It’s in its own world, and it can’t talk to us, interact with us, or know we are here.
Even if the internal processes of the brain are nothing more than randomised chance, it maps to ‘real’, causal processes in brains in ‘valid’ ontological contexts.
The question is—do those contexts/brains exist, and is there any real distinction between the minds produced by Boltzmann!Napoleon, Virtual!Napoleon, etc.? I would say yes, and no. Those contexts exist, and we are really discussing one mind that corresponds to all those processes.
As to why I would say that, it’s essentially Greg Egan’s Dust hypothesis/Max Tegmark’s Mathematical Universe thing.
That’s true; despite having the same damage per minute, the red sword’s stats are harmed more by armour damage reduction (since if x < y, then (x − a)y < x(y − a)).
It should be noted that the Armour Damage stat only affects a Sword’s Damage stat, while Dodge is global: Mitigated Damage per minute = (Sword Damage + Armour Damage) × Speed × (1 − Dodge)
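A small Python sketch of that formula (the stat values here are invented for illustration, and I'm assuming the game's convention that Armour Damage is a negative, flat reduction), which also shows the point above about equal-DPM swords:

```python
def mitigated_dpm(sword_damage, armour_damage, speed, dodge):
    # Armour Damage applies only to the Sword Damage term; Dodge scales everything.
    return (sword_damage + armour_damage) * speed * (1 - dodge)

# Two swords with identical raw damage per minute (4 * 6 == 6 * 4 == 24),
# facing the same armour (-1 flat reduction, no dodge):
fast_light = mitigated_dpm(4, -1, 6, 0)  # low damage, high speed
slow_heavy = mitigated_dpm(6, -1, 4, 0)  # high damage, low speed
print(fast_light, slow_heavy)  # 18 20: the flat reduction hurts the fast sword more
```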
Most of civilisation right now seems to be one giant gas-lighting immoral maze, where any effort to point out or mitigate the massive problems we have is sneered at or ignored.
I think that while a sleek decoding algorithm and a massive look-up table might be mathematically equivalent, they differ markedly in what sort of process actually carries them out, at least from the POV of an observer on the same ‘metaphysical level’ as the process. In this case, the look-up table is essentially the program-that-lists-the-results, and the algorithm is the shortest description of how to get them. The equivalence is because, in some kind of sense, process and results imply each other. In my mind, this is a bit like some kind of space-like-information and time-like-information equivalence, or as that between a hologram and the surface it’s projected from.
In the end, how are we to ever prefer one kind of description over the other? I can only think that it either comes down to some arbitrary aesthetic appreciation of elegance, or maybe some kind of match between the form of description and how it fits in with our POV; our minds can be described in many ways, but only one corresponds directly with how we observe ourselves and reality, and we want any model to describe our minds with as minimal re-framing as possible.
Now, could someone please tell me if what I have just said makes any kind of sense?!
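The algorithm/look-up-table equivalence above can be made concrete with a minimal Python sketch (parity is my arbitrary stand-in for the ‘sleek decoding algorithm’, over an arbitrary finite domain):

```python
def parity(n):
    # the 'sleek algorithm': a short description of *how* to get the answer
    return n % 2

# the 'program-that-lists-the-results': every answer precomputed and stored
table = {n: parity(n) for n in range(1_000)}

# Extensionally equivalent on the listed domain, yet the processes that
# produce an answer (compute vs. look up) are very different.
assert all(table[n] == parity(n) for n in range(1_000))
print("same mapping, different process")
```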
(9 Nov 2010 23:13 UTC; 1 point) Comment on A note on the description complexity of physical theories
I like the use of the quotient set here. In fact, I would go on to use it more comprehensively: not only does our observer-moment define an equivalence class, but any particular context implementing it does, too. It could be a simulation, or a simulation in a simulation in a (...), a small corner of a more general mathematical system, anything. The point is that for any and every defined part, it too will always be part of a quotient; there will always be an indistinguishability of what’s happening below.
As a result of this: does it mean anything to be ‘a simulation’?
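As a generic illustration of the quotient idea (nothing to do with observer-moments specifically; the relation “same value mod 3” is an arbitrary example of mine), here is a Python sketch of building a quotient set, where each class lumps together elements the relation cannot tell apart:

```python
# Generic quotient-set construction: partition a set of elements by an
# equivalence relation, given as a key function (x ~ y iff key(x) == key(y)).
def quotient(elements, key):
    classes = {}
    for x in elements:
        classes.setdefault(key(x), []).append(x)
    return list(classes.values())

# Partition 0..9 by value mod 3: three equivalence classes.
classes = quotient(range(10), key=lambda n: n % 3)
print(classes)  # [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
```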
My own current thinking is that the Born rule—the everydayness of everyday life—is a reflection of how consciousness must function. I am just not entirely sure how yet...
I just wish the throat swabs didn’t trigger my gag reflex.
I didn’t even know I had a gag reflex until I took one, and it makes the swab pretty useless as I can’t get near my tonsils without having to stop before I throw up.
I have a highly specific vision of a virtual reality heaven. Basically, I would be left alone for all eternity on my personal island
Funnily enough, you’ve just described, for me, a virtual reality hell
Indeed, since there is no absolute distinction between the parts of reality that are ‘you’ and those that aren’t, then solipsism isn’t by itself a meaningful concept.
That’s basically lucid daydreaming, then?
Trying to do that reminded me of something I used to do as a kid: I would watch static on TV, and find myself constructing imagery from it. Usually, it would be like traveling over landscapes, or a rotating/panning view over some entity, and the quality of the visuals would be like line drawings.
The reason I remember that is because my mental visualisations have a very similar quality. After maybe a ‘flash’ of a fairly detailed scene—or at least the suggestion of one—it rapidly devolves into short-lived abstractions, and only where I’m mentally focusing.
Perhaps what I need is to look at some static again and see if it improves visualisation.
And I tried it. Didn’t help :-/
Very interesting post. Perhaps I should mention that there’s a possibility of going to the other extreme: assuming you’re different from everyone else. A lot of very bad pretentious teenage poetry stands as testament to this.