The more quickly computers developed, the less history you’d have to bubble to hide the fact that they develop quickly.
If you hide a small portion of history, people can still see that there are computers after and not before.
Where can I find the LessOnline album?
Where can I download these?
In particular I don’t know how to download “The Ninth Night of November”.
Friday’s far enough for milk to go bad, but it’s near enough for those other considerations.
I naturally read the title as a (shallow/nonsubstantive) reference to the video game Undertale. It’s at the least a funny coincidence.
Here’s a quote, including spoilers.
‴
* LOVE, too, is an acronym.
* It stands for “Level of Violence.”
* A way of measuring someone’s capacity to hurt.
* The more you kill, the easier it becomes to distance yourself.
* The more you distance yourself, the less you will hurt.
* The more easily you can bring yourself to hurt others.
‴
Yeah, leading hypothesis changes.
The agent (at least effectively) has access to RNG that Murphy knows. I’m not sure if it’s supposed to be allowed private RNG as well.
I think the hypothesis class contains “square-free sequence until n”, for any finite n.
Set n = current index + 2.
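If “square-free” here means the combinatorics-on-words sense (no block is ever immediately repeated, i.e. no substring of the form xx) — which is my reading, not something the thread confirms — then a hypothesis “the sequence is square-free up to index n” is straightforward to check. A minimal sketch:

```python
def has_square(s):
    """Return True if s contains a "square": some nonempty block x
    immediately followed by an identical copy of itself (xx)."""
    n = len(s)
    for i in range(n):
        # Try every block length that still fits twice starting at i.
        for length in range(1, (n - i) // 2 + 1):
            if s[i:i + length] == s[i + length:i + 2 * length]:
                return True
    return False
```

Under this reading, the hypothesis “square-free until n” is just `not has_square(observed[:n])`, which is why the class contains a consistent hypothesis for every finite prefix.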
It’s very poorly worded; I might be misunderstanding too.
This essay is unironic.
“Cooperate iff I prove my partner cooperates with me” cooperates with itself, by Löb’s theorem. “Defect iff I prove my opponent defects against me” defects against itself, also by Löb’s theorem. The former beats the latter in a direct contest as well (even with unequal compute, IIRC).
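Both Löbian facts can be illustrated with a toy sketch that swaps proof search for mutual simulation under a depth budget — an analogue of my own devising, not the actual modal-logic construction; the base-case defaults are standing in for the role Löb’s theorem plays in the proof-based version:

```python
def fairbot(opponent, depth):
    # "Cooperate iff I predict my opponent cooperates with me."
    if depth == 0:
        return "C"  # optimistic base case: the stand-in for Löb's theorem
    return "C" if opponent(fairbot, depth - 1) == "C" else "D"

def mirror_defectbot(opponent, depth):
    # "Defect iff I predict my opponent defects against me."
    if depth == 0:
        return "D"  # pessimistic base case: the defection-flavored Löb analogue
    return "D" if opponent(mirror_defectbot, depth - 1) == "D" else "C"

def defectbot(opponent, depth):
    # Always defects, ignoring the opponent entirely.
    return "D"
```

In this toy model the optimistic default bubbles up, so `fairbot` cooperates with itself, while `mirror_defectbot`’s pessimistic default makes it defect against itself — mirroring the two self-referential claims above. `fairbot` also defects against `defectbot`, so it isn’t exploitable here.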
What is the human equivalent?
The Shepard tone seems way too short. Does it get longer for later levels? Or is there some way for me to change it?
Edit: It does.
My previous statements are technically correct, and IMO mostly make a correct point in context (that Truman had not realized, at the time, the immediate consequences of his decision), but are somewhat misleading. Thanks.
The process was still stupid, and not what Truman would have preferred. Truman was surprised and disturbed by the second bomb being dropped so quickly. But it seems like it wouldn’t have been too hard for him to anticipate and prevent this outcome, if he had been paying more attention (the same way he thought Hiroshima was a military base due to his own deficit of curiosity); I hadn’t realized that before, thanks.
Thanks, I hadn’t seen this.
I agree Truman thought Hiroshima was mostly a military base. IIRC you can see him make basic factual errors to that effect in an early draft of a speech.
IIUC, evolution is supposed to accelerate greatly during population growth.
I was doing do-nothing meditation maybe a month ago, managed to switch to a frame (for a few hours) where I felt planning as predicting my actions, and acting as perceiving my actions. IIRC, I exited when my brother-in-law asked me a programming question, ’cause maintaining that state took too much brainpower for me in my inexperience.
I think a lot of human action is simple “given good things happen, what will I do right now?”, which obviously leads to many kinds of problems. (Most obviously:)
It’d be weird for him to take sole credit; he only established full presidential control of nuclear weapons afterward. He didn’t even know about the second bomb until after it dropped.
Truman only made the call for the first bomb; the second was dropped by the military without his input, as if they were conducting a normal firebombing or something. Afterward, he cancelled the planned bombings of Kokura and Niigata, establishing presidential control of nuclear weapons.
We try to make models obedient; it’s an explicit target. If we find that a natural framing, it makes sense that AI does too. And it makes sense that that work can be undone.
At least the final chapter has the name wrong.
MIRI has also done work on decision problems outside LDT’s fair problem class, like the Open-Source Prisoner’s Dilemma.
FairBot cooperates if it can prove you cooperate, defects otherwise. In this case, being too hard to predict gets you defected against.