Yeah, leading hypothesis changes.
David Joshua Sartor
The agent (at least effectively) has access to RNG that Murphy knows. I’m not sure if it’s supposed to be allowed private RNG as well.
I think the hypothesis class contains “square-free sequence until n”, for any finite n.
Set n = current index + 2.

It’s very poorly worded; I might be misunderstanding too.
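For concreteness, here’s a minimal sketch (names and framing are mine, not from the original post) of what a “square-free sequence until n” hypothesis could check: the first n symbols contain no “square”, i.e. no nonempty block immediately followed by an identical copy of itself.

```python
def is_square_free(seq):
    """True iff seq contains no nonempty contiguous block
    immediately followed by an identical block (a 'square')."""
    n = len(seq)
    for start in range(n):
        for half in range(1, (n - start) // 2 + 1):
            if seq[start:start + half] == seq[start + half:start + 2 * half]:
                return False
    return True

def square_free_until(seq, n):
    """A hypothesis 'square-free until n' accepts any sequence
    whose first n symbols form a square-free word."""
    return is_square_free(seq[:n])
```

Under this reading, setting n = current index + 2 gives a hypothesis that constrains the next couple of symbols but says nothing beyond that.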
This essay is unironic.
“Cooperate iff I prove my partner cooperates with me” cooperates with itself, by Löb’s theorem. “Defect iff I prove my opponent defects against me” defects against itself, also by Löb. The former also beats the latter in a direct contest (even with unequal compute, IIRC).
What is the human equivalent?
The Shepard tone seems way too short. Does it get longer for later levels? Or is there some way for me to change it?
Edit: It does.
My previous statements are technically correct, and IMO mostly make a correct point in context (that Truman had not realized, at the time, the immediate consequences of his decision), but are somewhat misleading. Thanks.
The process was still stupid, and not what Truman would have preferred. Truman was surprised and disturbed by the second bomb being dropped so quickly. But it seems like it wouldn’t have been too hard for him to anticipate and prevent this outcome, if he had been paying more attention (the same way he thought Hiroshima was a military base due to his own deficit of curiosity); I hadn’t realized that before, thanks.
Thanks, I hadn’t seen this.
I agree Truman thought Hiroshima was mostly a military base. IIRC you can see him make basic factual errors to that effect in an early draft of a speech.
IIUC, evolution is supposed to accelerate greatly during population growth.
I was doing do-nothing meditation maybe a month ago, managed to switch to a frame (for a few hours) where I felt planning as predicting my actions, and acting as perceiving my actions. IIRC, I exited when my brother-in-law asked me a programming question, ’cause maintaining that state took too much brainpower for me in my inexperience.
I think a lot of human action is simple “given good things happen, what will I do right now?”, which obviously leads to many kinds of problems. (Most obviously:)
It’d be weird for him to take sole credit; he only established full presidential control of nuclear weapons afterward. He didn’t even know about the second bomb until after it dropped.
Truman only made the call for the first bomb; the second was dropped by the military without his input, as if they were conducting a normal firebombing or something. Afterward, he cancelled the planned bombings of Kokura and Niigata, establishing presidential control of nuclear weapons.
We try to make models obedient; it’s an explicit target. If we find that a natural framing, it makes sense AI does too. And it makes sense that that work can be undone.
At least the final chapter has the name wrong.
This is not fixed.
“Everything in my life is perfect right now.”
I couldn’t think about this before, ’cause it was obviously false in 100% of cases. I’ve gained greater understanding now.
“Perfect” is a 3-place word. It asks if a given state of the world is the best of a given set of states, given some values.
Is perfect(my life right now, ???, my values) true? If we take the minimal set as default, we get perfect(my life right now, my life right now, my values), which is obviously true. This isn’t totally unreasonable; there’s only one multiverse in the world, and there’s only one set of things in it I identify with. It’s very intuitive to just stop there.

The sentence on its own doesn’t feel particularly false or true. But it doesn’t feel inconceivable anymore either.
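As a toy illustration of the 3-place reading (function and argument names are hypothetical, not from the original), the dependence on the comparison set becomes explicit:

```python
def perfect(state, candidate_states, value):
    """A state is 'perfect' relative to a set of candidate states
    and a value function iff no candidate is strictly better."""
    return all(value(state) >= value(s) for s in candidate_states)

# With the minimal comparison set -- just the actual state itself --
# the predicate is trivially true, as noted above:
my_life = "my life right now"
assert perfect(my_life, [my_life], value=len)
```

The interesting disagreements are all about which candidate set gets silently filled in for the middle argument.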
I feel like I’ve come to the insight backward. I’ll keep meditating, haha.
It’s the “decided” part that’s the problem: beliefs are not supposed to involve any “deciding”.
I can pretty easily shift my perspective such that learning what I’m going to do feels like realizing that my action is overdetermined, rather than like “deciding”, for almost every action (and better meditators can get every action to feel this way). What I do to achieve this is: manually redefine my identity to exclude most of my decision-making process.
Similarly, many people include part of their world-model in their identity, such that learning about the world can feel like deciding something. The world-model’s doing very similar computation to the planner and whatnot; it seems reasonable for some people to include it.

There’s priors, there’s evidence, and if it feels like there’s a degree of freedom in what to do with those, then something has probably gone wrong.
Can just as easily say “There’s beliefs, there’s values, and if it feels like there’s a degree of freedom in what to do with those, then something has probably gone wrong.” There’s only one optimal decision, given a set of beliefs and values. (Of course we’re bounded, but that applies just as well to what we do with priors and evidence.)
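The claim that beliefs plus values pin down the decision can be written as a one-line argmax (a standard expected-utility sketch; the names here are illustrative, not from the original):

```python
def optimal_action(actions, outcomes, belief, utility):
    """Given beliefs (probability of each outcome under each action)
    and values (a utility per outcome), the decision is just an
    argmax over expected utility -- no remaining degree of freedom
    (up to ties)."""
    def expected_utility(a):
        return sum(belief(o, a) * utility(o) for o in outcomes)
    return max(actions, key=expected_utility)
```

Any felt freedom at this step is a sign the beliefs or the values weren’t actually fixed yet.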
I always “feel behind”.
I think this is caused by mistaking a 3-place word for a 2-place word. “Behind” takes something like arguments ‘current state’, ‘value function’, ‘schedule distribution’. I think you’ve misplaced the schedule distribution that’s supposed to go here, and are using some silly replacement, because you forgot it was an argument that mattered.
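Spelled out as a toy 3-place predicate (a sketch under my own naming, not the original’s), the forgotten argument becomes visible:

```python
def behind(current_state, value, schedule_samples):
    """'Behind' relative to an explicit schedule distribution:
    behind iff the current state's value falls below the mean of
    what sampled schedules say it should be by now."""
    expected = sum(schedule_samples) / len(schedule_samples)
    return value(current_state) < expected

# Dropping the third argument and silently substituting something
# unattainable (e.g. the most impressive trajectory you can imagine)
# makes behind() return True regardless of the current state.
```

The fix the comment suggests is not to stop comparing, but to notice that the schedule distribution is a real argument and choose it deliberately.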
I changed my mind; at least in the case of my sharing information with you, if you were perfectly trustworthy you’d totally just defer to my beliefs for not making me worse off as a result. But, as you said, plausibly even in this easy case being perfect is way too hobbling for humans ’cause of infohazards.
I naturally read the title as a (shallow/nonsubstantive) reference to the video game Undertale. It’s at the least a funny coincidence.
Here’s a quote, including spoilers.
‴
* LOVE, too, is an acronym.
* It stands for “Level of Violence.”
* A way of measuring someone’s capacity to hurt.
* The more you kill, the easier it becomes to distance yourself.
* The more you distance yourself, the less you will hurt.
* The more easily you can bring yourself to hurt others.
‴