Personal website: https://outsidetheasylum.blog/
Feedback about me: https://www.admonymous.co/isaacking
I don’t think that either “users active on the site on Petrov day” or “users who visited the homepage on Petrov day” is a good metric; someone who didn’t want to press the button would have no reason to visit the site, and they might not have done so either naturally (because they don’t check LW daily) or artificially (because they didn’t want to be tempted or didn’t want to engage with the exercise). I expect there are a lot of users who simply don’t care about Petrov day, and I think they should still be included in the set of “people who chose not to press the button”.
What about “users who viewed the Petrov day announcement article or visited the homepage”? That should more accurately capture the set of users who were aware of their ability to nuke the homepage and chose not to do so. (It still misses anyone who found out via social media, Manifold, etc., but there’s not much you can do about that.)
Something like that would be much more representative of real defection risks. It’s easy to cooperate with people we like; the hard part is cooperating with the outgroup.
(Good luck getting /r/sneerclub to agree to this though, since that itself would require cooperation.)
It’s difficult to incentivize people to not press the button, but here’s an attempt: If we successfully get through Petrov day without anyone pressing the button (other than the person who has already done so via the bug), I will donate $50 to a charity selected by majority vote.
These are much more creative than mine, good job. I especially liked 8, 12, 27, and 29.
1. fast plane and steer up
2. rocket ship
3. throw it really hard
4. extremely light balloon
5. wait for an upwards gust of wind
6. tall skyscraper
7. space elevator
8. earthquake energy storage
9. really big tsunami
10. asteroid impact launch
11. wait for the sun to engulf both
12. increase mass of earth enough to make moon crash
13. elevator pulley system with counterweight
14. superman
15. rename earth to “the moon”
16. take it to a moon replica on earth
17. touch it to a moon rock on earth
18. really big air rifle
19. wait for tectonic drift to make a big enough mountain
20. teleporter
21. point a particle accelerator upwards
22. attach to passing neutrino
23. solar sail
24. nuclear pulse propulsion
25. hack the universe
26. magic
27. hit it really hard with a golf club
28. drop a heavy weight on a see-saw
29. attach to dolphin leaping
30. high buoyancy object in deep ocean
31. put on top of erupting volcano
32. diet coke and mentos
33. drone
34. nuclear meltdown
35. turn a continent sideways
36. project an image of it onto the moon
37. build a large sand pile
38. coordinate an inverse droplet ring wave
39. throw it with one of those dog ball launchers
40. mail it
41. attach to next NASA mission
42. wait for the big crunch
43. large lasso
44. system of ropes in orbit
45. death star
46. miniaturize and inject into that lizard that shoots blood out of its eye, but mutated so it can shoot blood to the moon
47. insult it so much it leaves the planet
48. attach to Japanese paper balloon
49. that thing the asgardians use to travel between planets
50. inverse parachute
51. turn off gravity
52. use negative mass
There’s an experiment — insert obligatory replication crisis disclaimer — where one participant is told to gently poke another participant. The second participant is told to poke the first participant the same amount the first person poked them.
It turns out people tend to poke back slightly harder than they were first poked.
A few iterations later, they are striking each other really hard.
Do you know where I could read this study? I was unable to find it online with keywords like “poking”, “escalation”, etc.
A cognitive system with sufficiently high cognitive powers, given any medium-bandwidth channel of causal influence, will not find it difficult to bootstrap to overpowering capabilities independent of human infrastructure.
I don’t find the argument you provide for this point at all compelling; your example mechanism relies entirely on human infrastructure! Stick an AGI with a visual and audio display in the middle of the wilderness with no humans around and I wouldn’t expect it to be able to do anything meaningful with the animals that wander by before it breaks down. Let alone interstellar space.
Ah, so mortality almost always trends downwards except when it jumps species, at which point there can be a discontinuous jump upwards. That makes sense, thank you.
Why is it assumed that diseases evolve towards lower mortality? Every new disease is an evolved form of an old disease, so if that trend were true we’d expect no disease to ever have noticeable mortality.
Judging by a quick look at Twitter, this is going to be politically polarized right off the bat, with large swaths of the population immediately refusing vaccines or NPIs. So I think whether this turns into a serious pandemic is going to depend largely on the infectiousness of Monkeypox and not all that much else.
I don’t think that’s what’s happening in the situations I’m thinking about, but I’m not sure. Do you have an example dialogue that demonstrates someone taking a belief literally when it obviously wasn’t intended that way?
Do you think that conveying my motivation for the question would significantly lower the frequency of miscommunications? If so, why?
I tend to avoid that kind of thing because I don’t want it to bias the response. If I explain my motivations, then their response is more likely to be one that’s trying to affect my behavior rather than convey the most accurate answer. I don’t want to be manipulated in that way, so I try to ask questions that people are more likely to answer literally.
From the “interpretation” section of the link I provided:
Truthfulness should be the absolute norm for those who trust in Christ. Our simple yes or no should be completely binding since deception is never an option for us. If an oath is required to convince someone of our honesty or intent to be faithful, it suggests we may not be known for telling the truth in other circumstances.
It’s likely that the taking of oaths had become a way of manipulating people or allowing wiggle room to get out of some kinds of contracts. James is definite: For those in Christ, dishonesty is never an option.
I travel frequently for my job, and spend >50% of my time away from home. Can any of the existing cryonics organizations handle someone who has about an equal chance of dying in any of the ~200 largest cities in the US and Canada?
What’s the conceptual difference between “running a search” and “applying a bunch of rules”? Whatever rules the cat AI is applying to the image must be implemented by some step-by-step algorithm, and it seems to me like that could probably be represented as running a search over some space. Similarly, you could abstract away the step-by-step understanding of how breadth-first search works and say that the maze AI is applying the rule of “return the shortest path to the red door”.
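To make the point concrete, here’s a minimal sketch in Python (the maze encoding and function name are my own illustration, not from the original discussion). The body is an explicit breadth-first search over paths through the maze, but viewed from the outside, the whole function can just as well be described as applying the single rule “return the shortest path to the goal”:

```python
from collections import deque

def shortest_path(maze, start, goal):
    """Breadth-first search over a grid maze.

    maze: list of strings, where '#' is a wall and any other character is open.
    start, goal: (row, col) tuples.
    Returns the list of cells on a shortest path, or None if goal is unreachable.
    """
    rows, cols = len(maze), len(maze[0])
    frontier = deque([start])
    came_from = {start: None}  # each visited cell maps to its parent
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Reconstruct the path by walking parent links backwards.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != '#' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None
```

Whether we call this “running a search” (pointing at the frontier loop) or “applying a rule” (pointing at the input–output behavior) seems to be a choice of description level, not a property of the algorithm itself.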
How could an algorithm know Bob’s hypothesis is more complex?
I think this is supposed to be Alice’s hypothesis?